00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3696
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3297
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.081 The recommended git tool is: git
00:00:00.081 using credential 00000000-0000-0000-0000-000000000002
00:00:00.083 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.108 Fetching changes from the remote Git repository
00:00:00.109 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.134 Using shallow fetch with depth 1
00:00:00.134 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.134 > git --version # timeout=10
00:00:00.147 > git --version # 'git version 2.39.2'
00:00:00.147 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.156 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.156 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.967 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.979 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.990 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD)
00:00:03.990 > git config core.sparsecheckout # timeout=10
00:00:04.001 > git read-tree -mu HEAD # timeout=10
00:00:04.016 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5
00:00:04.033 Commit message: "packer: Add bios builder"
00:00:04.033 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10
00:00:04.146 [Pipeline] Start of Pipeline
00:00:04.161 [Pipeline] library
00:00:04.163 Loading library shm_lib@master
00:00:04.164 Library shm_lib@master is cached. Copying from home.
00:00:04.179 [Pipeline] node
00:00:04.264 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.266 [Pipeline] {
00:00:04.279 [Pipeline] catchError
00:00:04.281 [Pipeline] {
00:00:04.297 [Pipeline] wrap
00:00:04.309 [Pipeline] {
00:00:04.317 [Pipeline] stage
00:00:04.319 [Pipeline] { (Prologue)
00:00:04.478 [Pipeline] sh
00:00:04.764 + logger -p user.info -t JENKINS-CI
00:00:04.783 [Pipeline] echo
00:00:04.785 Node: GP11
00:00:04.791 [Pipeline] sh
00:00:05.082 [Pipeline] setCustomBuildProperty
00:00:05.097 [Pipeline] echo
00:00:05.100 Cleanup processes
00:00:05.122 [Pipeline] sh
00:00:05.413 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.413 730379 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.426 [Pipeline] sh
00:00:05.709 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.709 ++ grep -v 'sudo pgrep'
00:00:05.709 ++ awk '{print $1}'
00:00:05.709 + sudo kill -9
00:00:05.709 + true
00:00:05.721 [Pipeline] cleanWs
00:00:05.730 [WS-CLEANUP] Deleting project workspace...
00:00:05.730 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.736 [WS-CLEANUP] done
00:00:05.739 [Pipeline] setCustomBuildProperty
00:00:05.751 [Pipeline] sh
00:00:06.029 + sudo git config --global --replace-all safe.directory '*'
00:00:06.095 [Pipeline] httpRequest
00:00:06.150 [Pipeline] echo
00:00:06.151 Sorcerer 10.211.164.101 is alive
00:00:06.159 [Pipeline] httpRequest
00:00:06.164 HttpMethod: GET
00:00:06.164 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:06.164 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:06.187 Response Code: HTTP/1.1 200 OK
00:00:06.187 Success: Status code 200 is in the accepted range: 200,404
00:00:06.188 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:28.190 [Pipeline] sh
00:00:28.470 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:28.485 [Pipeline] httpRequest
00:00:28.514 [Pipeline] echo
00:00:28.516 Sorcerer 10.211.164.101 is alive
00:00:28.525 [Pipeline] httpRequest
00:00:28.530 HttpMethod: GET
00:00:28.531 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:28.532 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:28.544 Response Code: HTTP/1.1 200 OK
00:00:28.544 Success: Status code 200 is in the accepted range: 200,404
00:00:28.545 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:11.208 [Pipeline] sh
00:01:11.496 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:14.792 [Pipeline] sh
00:01:15.076 + git -C spdk log --oneline -n5
00:01:15.076 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:15.076 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:15.076 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:15.076 d005e023b raid: fix empty slot not updated in sb after resize
00:01:15.076 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:01:15.090 [Pipeline] withCredentials
00:01:15.100 > git --version # timeout=10
00:01:15.111 > git --version # 'git version 2.39.2'
00:01:15.126 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:15.128 [Pipeline] {
00:01:15.137 [Pipeline] retry
00:01:15.138 [Pipeline] {
00:01:15.153 [Pipeline] sh
00:01:15.432 + git ls-remote http://dpdk.org/git/dpdk main
00:01:17.350 [Pipeline] }
00:01:17.372 [Pipeline] // retry
00:01:17.378 [Pipeline] }
00:01:17.399 [Pipeline] // withCredentials
00:01:17.409 [Pipeline] httpRequest
00:01:17.433 [Pipeline] echo
00:01:17.435 Sorcerer 10.211.164.101 is alive
00:01:17.444 [Pipeline] httpRequest
00:01:17.449 HttpMethod: GET
00:01:17.450 URL: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz
00:01:17.450 Sending request to url: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz
00:01:17.453 Response Code: HTTP/1.1 200 OK
00:01:17.453 Success: Status code 200 is in the accepted range: 200,404
00:01:17.454 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz
00:01:22.354 [Pipeline] sh
00:01:22.637 + tar --no-same-owner -xf dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz
00:01:24.554 [Pipeline] sh
00:01:24.856 + git -C dpdk log --oneline -n5
00:01:24.856 82c47f005b version: 24.07-rc3
00:01:24.856 d9d1be537e doc: remove reference to mbuf pkt field
00:01:24.856 52c7393a03 doc: set required MinGW version in Windows guide
00:01:24.856 92439dc9ac dts: improve starting and stopping interactive shells
00:01:24.856 2b648cd4e4 dts: add context manager for interactive shells
00:01:24.867 [Pipeline] }
00:01:24.884 [Pipeline] // stage
00:01:24.893 [Pipeline] stage
00:01:24.895 [Pipeline] { (Prepare)
00:01:24.918 [Pipeline] writeFile
00:01:24.935 [Pipeline] sh
00:01:25.219 + logger -p user.info -t JENKINS-CI
00:01:25.230 [Pipeline] sh
00:01:25.512 + logger -p user.info -t JENKINS-CI
00:01:25.525 [Pipeline] sh
00:01:25.809 + cat autorun-spdk.conf
00:01:25.809 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.809 SPDK_TEST_NVMF=1
00:01:25.809 SPDK_TEST_NVME_CLI=1
00:01:25.809 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:25.809 SPDK_TEST_NVMF_NICS=e810
00:01:25.809 SPDK_TEST_VFIOUSER=1
00:01:25.809 SPDK_RUN_UBSAN=1
00:01:25.809 NET_TYPE=phy
00:01:25.809 SPDK_TEST_NATIVE_DPDK=main
00:01:25.809 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:25.817 RUN_NIGHTLY=1
00:01:25.821 [Pipeline] readFile
00:01:25.844 [Pipeline] withEnv
00:01:25.846 [Pipeline] {
00:01:25.857 [Pipeline] sh
00:01:26.139 + set -ex
00:01:26.139 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:26.139 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:26.139 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.139 ++ SPDK_TEST_NVMF=1
00:01:26.139 ++ SPDK_TEST_NVME_CLI=1
00:01:26.139 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:26.139 ++ SPDK_TEST_NVMF_NICS=e810
00:01:26.139 ++ SPDK_TEST_VFIOUSER=1
00:01:26.139 ++ SPDK_RUN_UBSAN=1
00:01:26.139 ++ NET_TYPE=phy
00:01:26.139 ++ SPDK_TEST_NATIVE_DPDK=main
00:01:26.139 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:26.139 ++ RUN_NIGHTLY=1
00:01:26.139 + case $SPDK_TEST_NVMF_NICS in
00:01:26.139 + DRIVERS=ice
00:01:26.139 + [[ tcp == \r\d\m\a ]]
00:01:26.139 + [[ -n ice ]]
00:01:26.139 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:26.139 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:26.139 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:26.139 rmmod: ERROR: Module irdma is not currently loaded
00:01:26.139 rmmod: ERROR: Module i40iw is not currently loaded
00:01:26.139 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:26.139 + true
00:01:26.139 + for D in $DRIVERS
00:01:26.139 + sudo modprobe ice
00:01:26.139 + exit 0
00:01:26.149 [Pipeline] }
00:01:26.163 [Pipeline] // withEnv
00:01:26.167 [Pipeline] }
00:01:26.180 [Pipeline] // stage
00:01:26.189 [Pipeline] catchError
00:01:26.191 [Pipeline] {
00:01:26.203 [Pipeline] timeout
00:01:26.203 Timeout set to expire in 50 min
00:01:26.204 [Pipeline] {
00:01:26.216 [Pipeline] stage
00:01:26.218 [Pipeline] { (Tests)
00:01:26.230 [Pipeline] sh
00:01:26.514 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.514 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.514 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.514 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:26.514 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:26.514 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:26.514 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:26.514 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:26.514 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:26.514 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:26.514 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:26.514 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.514 + source /etc/os-release
00:01:26.514 ++ NAME='Fedora Linux'
00:01:26.514 ++ VERSION='38 (Cloud Edition)'
00:01:26.514 ++ ID=fedora
00:01:26.514 ++ VERSION_ID=38
00:01:26.514 ++ VERSION_CODENAME=
00:01:26.514 ++ PLATFORM_ID=platform:f38
00:01:26.514 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:26.514 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:26.514 ++ LOGO=fedora-logo-icon
00:01:26.514 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:26.514 ++ HOME_URL=https://fedoraproject.org/
00:01:26.515 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:26.515 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:26.515 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:26.515 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:26.515 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:26.515 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:26.515 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:26.515 ++ SUPPORT_END=2024-05-14
00:01:26.515 ++ VARIANT='Cloud Edition'
00:01:26.515 ++ VARIANT_ID=cloud
00:01:26.515 + uname -a
00:01:26.515 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:26.515 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:27.452 Hugepages
00:01:27.452 node hugesize free / total
00:01:27.452 node0 1048576kB 0 / 0
00:01:27.452 node0 2048kB 0 / 0
00:01:27.452 node1 1048576kB 0 / 0
00:01:27.452 node1 2048kB 0 / 0
00:01:27.452
00:01:27.452 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:27.452 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:27.452 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:27.452 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:27.452 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:27.452 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:27.452 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:27.452 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:27.452 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:27.452 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:27.452 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:27.452 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:27.452 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:27.452 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:27.452 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:27.452 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:27.452 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:27.452 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:27.452 + rm -f /tmp/spdk-ld-path
00:01:27.452 + source autorun-spdk.conf
00:01:27.452 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.452 ++ SPDK_TEST_NVMF=1
00:01:27.452 ++ SPDK_TEST_NVME_CLI=1
00:01:27.452 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.452 ++ SPDK_TEST_NVMF_NICS=e810
00:01:27.452 ++ SPDK_TEST_VFIOUSER=1
00:01:27.452 ++ SPDK_RUN_UBSAN=1
00:01:27.452 ++ NET_TYPE=phy
00:01:27.452 ++ SPDK_TEST_NATIVE_DPDK=main
00:01:27.452 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:27.452 ++ RUN_NIGHTLY=1
00:01:27.452 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:27.452 + [[ -n '' ]]
00:01:27.452 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:27.711 + for M in /var/spdk/build-*-manifest.txt
00:01:27.711 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:27.711 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:27.711 + for M in /var/spdk/build-*-manifest.txt
00:01:27.711 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:27.711 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:27.711 ++ uname
00:01:27.711 + [[ Linux == \L\i\n\u\x ]]
00:01:27.711 + sudo dmesg -T
00:01:27.711 + sudo dmesg --clear
00:01:27.711 + dmesg_pid=731087
00:01:27.711 + [[ Fedora Linux == FreeBSD ]]
00:01:27.711 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.711 + sudo dmesg -Tw
00:01:27.711 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.711 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:27.711 + [[ -x /usr/src/fio-static/fio ]]
00:01:27.711 + export FIO_BIN=/usr/src/fio-static/fio
00:01:27.711 + FIO_BIN=/usr/src/fio-static/fio
00:01:27.711 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:27.711 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:27.711 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:27.711 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.711 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.711 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:27.711 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.711 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.711 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:27.711 Test configuration:
00:01:27.711 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.711 SPDK_TEST_NVMF=1
00:01:27.711 SPDK_TEST_NVME_CLI=1
00:01:27.711 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.711 SPDK_TEST_NVMF_NICS=e810
00:01:27.711 SPDK_TEST_VFIOUSER=1
00:01:27.711 SPDK_RUN_UBSAN=1
00:01:27.711 NET_TYPE=phy
00:01:27.711 SPDK_TEST_NATIVE_DPDK=main
00:01:27.711 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:27.711 RUN_NIGHTLY=1
08:34:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:27.711 08:34:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:27.711 08:34:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:27.711 08:34:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:27.711 08:34:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.711 08:34:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.711 08:34:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.711 08:34:46 -- paths/export.sh@5 -- $ export PATH
00:01:27.711 08:34:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.711 08:34:46 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:27.711 08:34:46 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:27.711 08:34:46 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721975686.XXXXXX
00:01:27.711 08:34:46 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721975686.wK42sU
00:01:27.711 08:34:46 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:27.711 08:34:46 -- common/autobuild_common.sh@453 -- $ '[' -n main ']'
00:01:27.711 08:34:46 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:27.711 08:34:46 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:01:27.711 08:34:46 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:27.711 08:34:46 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:27.711 08:34:46 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:27.711 08:34:46 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:27.711 08:34:46 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.711 08:34:46 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:01:27.711 08:34:46 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:27.711 08:34:46 -- pm/common@17 -- $ local monitor
00:01:27.711 08:34:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.711 08:34:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.711 08:34:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.711 08:34:46 -- pm/common@21 -- $ date +%s
00:01:27.711 08:34:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.711 08:34:46 -- pm/common@21 -- $ date +%s
00:01:27.711 08:34:46 -- pm/common@25 -- $ sleep 1
00:01:27.711 08:34:46 -- pm/common@21 -- $ date +%s
00:01:27.711 08:34:46 -- pm/common@21 -- $ date +%s
00:01:27.712 08:34:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721975686
00:01:27.712 08:34:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721975686
00:01:27.712 08:34:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721975686
00:01:27.712 08:34:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721975686
00:01:27.712 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721975686_collect-vmstat.pm.log
00:01:27.712 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721975686_collect-cpu-load.pm.log
00:01:27.712 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721975686_collect-cpu-temp.pm.log
00:01:27.712 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721975686_collect-bmc-pm.bmc.pm.log
00:01:28.697 08:34:47 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:28.697 08:34:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:28.697 08:34:47 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:28.697 08:34:47 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.697 08:34:47 -- spdk/autobuild.sh@16 -- $ date -u
00:01:28.697 Fri Jul 26 06:34:47 AM UTC 2024
00:01:28.697 08:34:47 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:28.697 v24.09-pre-321-g704257090
00:01:28.697 08:34:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:28.697 08:34:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:28.697 08:34:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:28.697 08:34:47 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:28.697 08:34:47 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:28.697 08:34:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.697 ************************************
00:01:28.697 START TEST ubsan
00:01:28.697 ************************************
00:01:28.697 08:34:47 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:28.697 using ubsan
00:01:28.697
00:01:28.697 real 0m0.000s
00:01:28.697 user 0m0.000s
00:01:28.697 sys 0m0.000s
00:01:28.697 08:34:47 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:28.697 08:34:47 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.697 ************************************
00:01:28.697 END TEST ubsan
00:01:28.697 ************************************
00:01:28.697 08:34:47 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:01:28.697 08:34:47 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:28.697 08:34:47 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:28.697 08:34:47 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:28.697 08:34:47 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:28.697 08:34:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.697 ************************************
00:01:28.697 START TEST build_native_dpdk
00:01:28.697 ************************************
00:01:28.697 08:34:47 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.697 08:34:47 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:28.698 82c47f005b version: 24.07-rc3
00:01:28.698 d9d1be537e doc: remove reference to mbuf pkt field
00:01:28.698 52c7393a03 doc: set required MinGW version in Windows guide
00:01:28.698 92439dc9ac dts: improve starting and stopping interactive shells
00:01:28.698 2b648cd4e4 dts: add context manager for interactive shells
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc3
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc3 21.11.0
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 21.11.0
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:28.698 08:34:47 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:01:28.698 08:34:47 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:28.698 patching file config/rte_config.h
00:01:28.698 Hunk #1 succeeded at 70 (offset 11 lines).
00:01:28.958 08:34:47 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc3 24.07.0
00:01:28.958 08:34:47 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 24.07.0
00:01:28.958 08:34:47 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:28.958 08:34:47 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:28.958 08:34:47 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:28.958 08:34:47 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] ))
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ ))
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] ))
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ ))
00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc3 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc3 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc3 =~ ^[0-9]+$ ]] 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^0x ]] 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^[a-f0-9]+$ ]] 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:01:28.959 08:34:47 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:28.959 08:34:47 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:28.959 08:34:47 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:28.959 08:34:47 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:28.959 08:34:47 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:28.959 08:34:47 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:33.159 The Meson build system 00:01:33.159 Version: 1.3.1 00:01:33.159 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:33.159 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:33.159 Build type: native build 00:01:33.159 Program cat found: YES (/usr/bin/cat) 00:01:33.159 Project name: DPDK 00:01:33.159 Project version: 24.07.0-rc3 00:01:33.159 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:33.159 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:33.159 Host machine cpu family: x86_64 00:01:33.159 Host machine cpu: x86_64 00:01:33.159 Message: ## Building in Developer Mode ## 00:01:33.159 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:33.159 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:33.159 Program options-ibverbs-static.sh found: YES 
(/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:33.159 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:33.159 Program cat found: YES (/usr/bin/cat) 00:01:33.159 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:33.159 Compiler for C supports arguments -march=native: YES 00:01:33.159 Checking for size of "void *" : 8 00:01:33.159 Checking for size of "void *" : 8 (cached) 00:01:33.159 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:33.159 Library m found: YES 00:01:33.159 Library numa found: YES 00:01:33.159 Has header "numaif.h" : YES 00:01:33.159 Library fdt found: NO 00:01:33.159 Library execinfo found: NO 00:01:33.159 Has header "execinfo.h" : YES 00:01:33.159 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:33.159 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:33.160 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:33.160 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:33.160 Run-time dependency openssl found: YES 3.0.9 00:01:33.160 Run-time dependency libpcap found: YES 1.10.4 00:01:33.160 Has header "pcap.h" with dependency libpcap: YES 00:01:33.160 Compiler for C supports arguments -Wcast-qual: YES 00:01:33.160 Compiler for C supports arguments -Wdeprecated: YES 00:01:33.160 Compiler for C supports arguments -Wformat: YES 00:01:33.160 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:33.160 Compiler for C supports arguments -Wformat-security: NO 00:01:33.160 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.160 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:33.160 Compiler for C supports arguments -Wnested-externs: YES 00:01:33.160 Compiler for C supports arguments -Wold-style-definition: YES 00:01:33.160 Compiler for C supports arguments -Wpointer-arith: YES 00:01:33.160 Compiler for C 
supports arguments -Wsign-compare: YES 00:01:33.160 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:33.160 Compiler for C supports arguments -Wundef: YES 00:01:33.160 Compiler for C supports arguments -Wwrite-strings: YES 00:01:33.160 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:33.160 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:33.160 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.160 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:33.160 Program objdump found: YES (/usr/bin/objdump) 00:01:33.160 Compiler for C supports arguments -mavx512f: YES 00:01:33.160 Checking if "AVX512 checking" compiles: YES 00:01:33.160 Fetching value of define "__SSE4_2__" : 1 00:01:33.160 Fetching value of define "__AES__" : 1 00:01:33.160 Fetching value of define "__AVX__" : 1 00:01:33.160 Fetching value of define "__AVX2__" : (undefined) 00:01:33.160 Fetching value of define "__AVX512BW__" : (undefined) 00:01:33.160 Fetching value of define "__AVX512CD__" : (undefined) 00:01:33.160 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:33.160 Fetching value of define "__AVX512F__" : (undefined) 00:01:33.160 Fetching value of define "__AVX512VL__" : (undefined) 00:01:33.160 Fetching value of define "__PCLMUL__" : 1 00:01:33.160 Fetching value of define "__RDRND__" : 1 00:01:33.160 Fetching value of define "__RDSEED__" : (undefined) 00:01:33.160 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:33.160 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:33.160 Message: lib/log: Defining dependency "log" 00:01:33.160 Message: lib/kvargs: Defining dependency "kvargs" 00:01:33.160 Message: lib/argparse: Defining dependency "argparse" 00:01:33.160 Message: lib/telemetry: Defining dependency "telemetry" 00:01:33.160 Checking for function "getentropy" : NO 00:01:33.160 Message: lib/eal: Defining dependency "eal" 00:01:33.160 Message: 
lib/ptr_compress: Defining dependency "ptr_compress" 00:01:33.160 Message: lib/ring: Defining dependency "ring" 00:01:33.160 Message: lib/rcu: Defining dependency "rcu" 00:01:33.160 Message: lib/mempool: Defining dependency "mempool" 00:01:33.160 Message: lib/mbuf: Defining dependency "mbuf" 00:01:33.160 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:33.160 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.160 Compiler for C supports arguments -mpclmul: YES 00:01:33.160 Compiler for C supports arguments -maes: YES 00:01:33.160 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:33.160 Compiler for C supports arguments -mavx512bw: YES 00:01:33.160 Compiler for C supports arguments -mavx512dq: YES 00:01:33.160 Compiler for C supports arguments -mavx512vl: YES 00:01:33.160 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:33.160 Compiler for C supports arguments -mavx2: YES 00:01:33.160 Compiler for C supports arguments -mavx: YES 00:01:33.160 Message: lib/net: Defining dependency "net" 00:01:33.160 Message: lib/meter: Defining dependency "meter" 00:01:33.160 Message: lib/ethdev: Defining dependency "ethdev" 00:01:33.160 Message: lib/pci: Defining dependency "pci" 00:01:33.160 Message: lib/cmdline: Defining dependency "cmdline" 00:01:33.160 Message: lib/metrics: Defining dependency "metrics" 00:01:33.160 Message: lib/hash: Defining dependency "hash" 00:01:33.160 Message: lib/timer: Defining dependency "timer" 00:01:33.160 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.160 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:33.160 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:33.160 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:33.160 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:33.160 Message: lib/acl: Defining dependency "acl" 00:01:33.160 Message: lib/bbdev: Defining dependency "bbdev" 
00:01:33.160 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:33.160 Run-time dependency libelf found: YES 0.190 00:01:33.160 Message: lib/bpf: Defining dependency "bpf" 00:01:33.160 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:33.160 Message: lib/compressdev: Defining dependency "compressdev" 00:01:33.160 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:33.160 Message: lib/distributor: Defining dependency "distributor" 00:01:33.160 Message: lib/dmadev: Defining dependency "dmadev" 00:01:33.160 Message: lib/efd: Defining dependency "efd" 00:01:33.160 Message: lib/eventdev: Defining dependency "eventdev" 00:01:33.160 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:33.160 Message: lib/gpudev: Defining dependency "gpudev" 00:01:33.160 Message: lib/gro: Defining dependency "gro" 00:01:33.160 Message: lib/gso: Defining dependency "gso" 00:01:33.160 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:33.160 Message: lib/jobstats: Defining dependency "jobstats" 00:01:33.160 Message: lib/latencystats: Defining dependency "latencystats" 00:01:33.160 Message: lib/lpm: Defining dependency "lpm" 00:01:33.160 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.160 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:33.160 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:33.160 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:33.160 Message: lib/member: Defining dependency "member" 00:01:33.160 Message: lib/pcapng: Defining dependency "pcapng" 00:01:33.160 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:33.160 Message: lib/power: Defining dependency "power" 00:01:33.160 Message: lib/rawdev: Defining dependency "rawdev" 00:01:33.160 Message: lib/regexdev: Defining dependency "regexdev" 00:01:33.160 Message: lib/mldev: Defining dependency "mldev" 00:01:33.160 Message: lib/rib: Defining dependency "rib" 00:01:33.160 Message: 
lib/reorder: Defining dependency "reorder" 00:01:33.160 Message: lib/sched: Defining dependency "sched" 00:01:33.160 Message: lib/security: Defining dependency "security" 00:01:33.160 Message: lib/stack: Defining dependency "stack" 00:01:33.160 Has header "linux/userfaultfd.h" : YES 00:01:33.160 Has header "linux/vduse.h" : YES 00:01:33.160 Message: lib/vhost: Defining dependency "vhost" 00:01:33.160 Message: lib/ipsec: Defining dependency "ipsec" 00:01:33.160 Message: lib/pdcp: Defining dependency "pdcp" 00:01:33.160 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.160 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:33.160 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:33.160 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:33.160 Message: lib/fib: Defining dependency "fib" 00:01:33.160 Message: lib/port: Defining dependency "port" 00:01:33.160 Message: lib/pdump: Defining dependency "pdump" 00:01:33.160 Message: lib/table: Defining dependency "table" 00:01:33.160 Message: lib/pipeline: Defining dependency "pipeline" 00:01:33.160 Message: lib/graph: Defining dependency "graph" 00:01:33.160 Message: lib/node: Defining dependency "node" 00:01:34.106 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:34.106 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:34.106 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:34.106 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:34.106 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:34.106 Compiler for C supports arguments -Wno-unused-value: YES 00:01:34.106 Compiler for C supports arguments -Wno-format: YES 00:01:34.106 Compiler for C supports arguments -Wno-format-security: YES 00:01:34.106 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:34.106 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:34.106 Compiler for C supports 
arguments -Wno-unused-but-set-variable: YES 00:01:34.106 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:34.106 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:34.106 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:34.106 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:34.106 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:34.106 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:34.106 Has header "sys/epoll.h" : YES 00:01:34.106 Program doxygen found: YES (/usr/bin/doxygen) 00:01:34.106 Configuring doxy-api-html.conf using configuration 00:01:34.106 Configuring doxy-api-man.conf using configuration 00:01:34.106 Program mandb found: YES (/usr/bin/mandb) 00:01:34.106 Program sphinx-build found: NO 00:01:34.106 Configuring rte_build_config.h using configuration 00:01:34.106 Message: 00:01:34.106 ================= 00:01:34.106 Applications Enabled 00:01:34.106 ================= 00:01:34.106 00:01:34.106 apps: 00:01:34.106 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:34.106 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:34.106 test-pmd, test-regex, test-sad, test-security-perf, 00:01:34.106 00:01:34.106 Message: 00:01:34.106 ================= 00:01:34.106 Libraries Enabled 00:01:34.106 ================= 00:01:34.106 00:01:34.106 libs: 00:01:34.106 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:34.106 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:34.106 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:34.106 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:34.106 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:34.106 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:34.106 vhost, ipsec, pdcp, fib, port, pdump, 
table, pipeline, 00:01:34.106 graph, node, 00:01:34.106 00:01:34.106 Message: 00:01:34.106 =============== 00:01:34.106 Drivers Enabled 00:01:34.106 =============== 00:01:34.106 00:01:34.106 common: 00:01:34.106 00:01:34.106 bus: 00:01:34.106 pci, vdev, 00:01:34.106 mempool: 00:01:34.106 ring, 00:01:34.106 dma: 00:01:34.106 00:01:34.106 net: 00:01:34.106 i40e, 00:01:34.106 raw: 00:01:34.106 00:01:34.106 crypto: 00:01:34.106 00:01:34.106 compress: 00:01:34.106 00:01:34.106 regex: 00:01:34.106 00:01:34.106 ml: 00:01:34.106 00:01:34.106 vdpa: 00:01:34.106 00:01:34.106 event: 00:01:34.106 00:01:34.106 baseband: 00:01:34.106 00:01:34.106 gpu: 00:01:34.106 00:01:34.106 00:01:34.106 Message: 00:01:34.106 ================= 00:01:34.106 Content Skipped 00:01:34.106 ================= 00:01:34.106 00:01:34.106 apps: 00:01:34.106 00:01:34.106 libs: 00:01:34.106 00:01:34.106 drivers: 00:01:34.106 common/cpt: not in enabled drivers build config 00:01:34.106 common/dpaax: not in enabled drivers build config 00:01:34.106 common/iavf: not in enabled drivers build config 00:01:34.106 common/idpf: not in enabled drivers build config 00:01:34.106 common/ionic: not in enabled drivers build config 00:01:34.106 common/mvep: not in enabled drivers build config 00:01:34.106 common/octeontx: not in enabled drivers build config 00:01:34.106 bus/auxiliary: not in enabled drivers build config 00:01:34.106 bus/cdx: not in enabled drivers build config 00:01:34.106 bus/dpaa: not in enabled drivers build config 00:01:34.106 bus/fslmc: not in enabled drivers build config 00:01:34.106 bus/ifpga: not in enabled drivers build config 00:01:34.106 bus/platform: not in enabled drivers build config 00:01:34.106 bus/uacce: not in enabled drivers build config 00:01:34.106 bus/vmbus: not in enabled drivers build config 00:01:34.106 common/cnxk: not in enabled drivers build config 00:01:34.106 common/mlx5: not in enabled drivers build config 00:01:34.106 common/nfp: not in enabled drivers build config 
00:01:34.106 common/nitrox: not in enabled drivers build config 00:01:34.106 common/qat: not in enabled drivers build config 00:01:34.106 common/sfc_efx: not in enabled drivers build config 00:01:34.106 mempool/bucket: not in enabled drivers build config 00:01:34.106 mempool/cnxk: not in enabled drivers build config 00:01:34.106 mempool/dpaa: not in enabled drivers build config 00:01:34.106 mempool/dpaa2: not in enabled drivers build config 00:01:34.106 mempool/octeontx: not in enabled drivers build config 00:01:34.106 mempool/stack: not in enabled drivers build config 00:01:34.106 dma/cnxk: not in enabled drivers build config 00:01:34.106 dma/dpaa: not in enabled drivers build config 00:01:34.106 dma/dpaa2: not in enabled drivers build config 00:01:34.106 dma/hisilicon: not in enabled drivers build config 00:01:34.106 dma/idxd: not in enabled drivers build config 00:01:34.106 dma/ioat: not in enabled drivers build config 00:01:34.106 dma/odm: not in enabled drivers build config 00:01:34.106 dma/skeleton: not in enabled drivers build config 00:01:34.106 net/af_packet: not in enabled drivers build config 00:01:34.106 net/af_xdp: not in enabled drivers build config 00:01:34.106 net/ark: not in enabled drivers build config 00:01:34.106 net/atlantic: not in enabled drivers build config 00:01:34.106 net/avp: not in enabled drivers build config 00:01:34.106 net/axgbe: not in enabled drivers build config 00:01:34.106 net/bnx2x: not in enabled drivers build config 00:01:34.106 net/bnxt: not in enabled drivers build config 00:01:34.106 net/bonding: not in enabled drivers build config 00:01:34.106 net/cnxk: not in enabled drivers build config 00:01:34.106 net/cpfl: not in enabled drivers build config 00:01:34.106 net/cxgbe: not in enabled drivers build config 00:01:34.106 net/dpaa: not in enabled drivers build config 00:01:34.106 net/dpaa2: not in enabled drivers build config 00:01:34.106 net/e1000: not in enabled drivers build config 00:01:34.106 net/ena: not in enabled 
drivers build config 00:01:34.106 net/enetc: not in enabled drivers build config 00:01:34.106 net/enetfec: not in enabled drivers build config 00:01:34.106 net/enic: not in enabled drivers build config 00:01:34.106 net/failsafe: not in enabled drivers build config 00:01:34.106 net/fm10k: not in enabled drivers build config 00:01:34.106 net/gve: not in enabled drivers build config 00:01:34.106 net/hinic: not in enabled drivers build config 00:01:34.106 net/hns3: not in enabled drivers build config 00:01:34.106 net/iavf: not in enabled drivers build config 00:01:34.106 net/ice: not in enabled drivers build config 00:01:34.106 net/idpf: not in enabled drivers build config 00:01:34.106 net/igc: not in enabled drivers build config 00:01:34.106 net/ionic: not in enabled drivers build config 00:01:34.106 net/ipn3ke: not in enabled drivers build config 00:01:34.106 net/ixgbe: not in enabled drivers build config 00:01:34.106 net/mana: not in enabled drivers build config 00:01:34.106 net/memif: not in enabled drivers build config 00:01:34.106 net/mlx4: not in enabled drivers build config 00:01:34.106 net/mlx5: not in enabled drivers build config 00:01:34.106 net/mvneta: not in enabled drivers build config 00:01:34.106 net/mvpp2: not in enabled drivers build config 00:01:34.106 net/netvsc: not in enabled drivers build config 00:01:34.106 net/nfb: not in enabled drivers build config 00:01:34.106 net/nfp: not in enabled drivers build config 00:01:34.106 net/ngbe: not in enabled drivers build config 00:01:34.106 net/ntnic: not in enabled drivers build config 00:01:34.106 net/null: not in enabled drivers build config 00:01:34.106 net/octeontx: not in enabled drivers build config 00:01:34.106 net/octeon_ep: not in enabled drivers build config 00:01:34.106 net/pcap: not in enabled drivers build config 00:01:34.106 net/pfe: not in enabled drivers build config 00:01:34.106 net/qede: not in enabled drivers build config 00:01:34.106 net/ring: not in enabled drivers build config 
00:01:34.106 net/sfc: not in enabled drivers build config 00:01:34.106 net/softnic: not in enabled drivers build config 00:01:34.106 net/tap: not in enabled drivers build config 00:01:34.106 net/thunderx: not in enabled drivers build config 00:01:34.106 net/txgbe: not in enabled drivers build config 00:01:34.106 net/vdev_netvsc: not in enabled drivers build config 00:01:34.106 net/vhost: not in enabled drivers build config 00:01:34.106 net/virtio: not in enabled drivers build config 00:01:34.107 net/vmxnet3: not in enabled drivers build config 00:01:34.107 raw/cnxk_bphy: not in enabled drivers build config 00:01:34.107 raw/cnxk_gpio: not in enabled drivers build config 00:01:34.107 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:34.107 raw/ifpga: not in enabled drivers build config 00:01:34.107 raw/ntb: not in enabled drivers build config 00:01:34.107 raw/skeleton: not in enabled drivers build config 00:01:34.107 crypto/armv8: not in enabled drivers build config 00:01:34.107 crypto/bcmfs: not in enabled drivers build config 00:01:34.107 crypto/caam_jr: not in enabled drivers build config 00:01:34.107 crypto/ccp: not in enabled drivers build config 00:01:34.107 crypto/cnxk: not in enabled drivers build config 00:01:34.107 crypto/dpaa_sec: not in enabled drivers build config 00:01:34.107 crypto/dpaa2_sec: not in enabled drivers build config 00:01:34.107 crypto/ionic: not in enabled drivers build config 00:01:34.107 crypto/ipsec_mb: not in enabled drivers build config 00:01:34.107 crypto/mlx5: not in enabled drivers build config 00:01:34.107 crypto/mvsam: not in enabled drivers build config 00:01:34.107 crypto/nitrox: not in enabled drivers build config 00:01:34.107 crypto/null: not in enabled drivers build config 00:01:34.107 crypto/octeontx: not in enabled drivers build config 00:01:34.107 crypto/openssl: not in enabled drivers build config 00:01:34.107 crypto/scheduler: not in enabled drivers build config 00:01:34.107 crypto/uadk: not in enabled drivers 
build config 00:01:34.107 crypto/virtio: not in enabled drivers build config 00:01:34.107 compress/isal: not in enabled drivers build config 00:01:34.107 compress/mlx5: not in enabled drivers build config 00:01:34.107 compress/nitrox: not in enabled drivers build config 00:01:34.107 compress/octeontx: not in enabled drivers build config 00:01:34.107 compress/uadk: not in enabled drivers build config 00:01:34.107 compress/zlib: not in enabled drivers build config 00:01:34.107 regex/mlx5: not in enabled drivers build config 00:01:34.107 regex/cn9k: not in enabled drivers build config 00:01:34.107 ml/cnxk: not in enabled drivers build config 00:01:34.107 vdpa/ifc: not in enabled drivers build config 00:01:34.107 vdpa/mlx5: not in enabled drivers build config 00:01:34.107 vdpa/nfp: not in enabled drivers build config 00:01:34.107 vdpa/sfc: not in enabled drivers build config 00:01:34.107 event/cnxk: not in enabled drivers build config 00:01:34.107 event/dlb2: not in enabled drivers build config 00:01:34.107 event/dpaa: not in enabled drivers build config 00:01:34.107 event/dpaa2: not in enabled drivers build config 00:01:34.107 event/dsw: not in enabled drivers build config 00:01:34.107 event/opdl: not in enabled drivers build config 00:01:34.107 event/skeleton: not in enabled drivers build config 00:01:34.107 event/sw: not in enabled drivers build config 00:01:34.107 event/octeontx: not in enabled drivers build config 00:01:34.107 baseband/acc: not in enabled drivers build config 00:01:34.107 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:34.107 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:34.107 baseband/la12xx: not in enabled drivers build config 00:01:34.107 baseband/null: not in enabled drivers build config 00:01:34.107 baseband/turbo_sw: not in enabled drivers build config 00:01:34.107 gpu/cuda: not in enabled drivers build config 00:01:34.107 00:01:34.107 00:01:34.107 Build targets in project: 224 00:01:34.107 00:01:34.107 
DPDK 24.07.0-rc3
00:01:34.107 
00:01:34.107 User defined options
00:01:34.107 libdir : lib
00:01:34.107 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:34.107 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:34.107 c_link_args : 
00:01:34.107 enable_docs : false
00:01:34.107 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:34.107 enable_kmods : false
00:01:34.107 machine : native
00:01:34.107 tests : false
00:01:34.107 
00:01:34.107 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:34.107 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:01:34.107 08:34:52 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:01:34.107 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:34.107 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:34.107 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:34.366 [3/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:34.366 [4/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:34.366 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:34.366 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:34.366 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:34.366 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:34.366 [9/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:34.366 [10/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:34.366 [11/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:34.366 [12/723] Linking static target lib/librte_kvargs.a
00:01:34.366 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:34.366 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:34.630 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:34.630 [16/723] Linking static target lib/librte_log.a
00:01:34.630 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:01:34.630 [18/723] Linking static target lib/librte_argparse.a
00:01:34.889 [19/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.153 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.153 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:35.153 [22/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:35.415 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:35.415 [24/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:35.415 [25/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:35.415 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:35.415 [27/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:35.415 [28/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:35.415 [29/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:35.415 [30/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:35.415 [31/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.415 [32/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:35.415 [33/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:35.415 [34/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:35.415 [35/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:35.415 [36/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:35.415 [37/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:35.415 [38/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:35.415 [39/723] Linking target lib/librte_log.so.24.2
00:01:35.415 [40/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:35.415 [41/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:35.415 [42/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:35.415 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:35.415 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:35.415 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:35.415 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:35.415 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:35.415 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:35.416 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:35.416 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:35.678 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:35.678 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:35.678 [53/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:35.678 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:35.678 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:35.678 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:35.678 [57/723] Compiling
C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:35.678 [58/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.678 [59/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:35.678 [60/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.678 [61/723] Linking target lib/librte_kvargs.so.24.2 00:01:35.678 [62/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:35.678 [63/723] Linking target lib/librte_argparse.so.24.2 00:01:35.936 [64/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.936 [65/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:35.936 [66/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:35.936 [67/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.936 [68/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.936 [69/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:36.198 [70/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:36.198 [71/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:36.198 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:36.198 [73/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:36.478 [74/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:36.478 [75/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:36.478 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:36.478 [77/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:36.478 [78/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:36.478 [79/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:36.478 [80/723] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:36.478 [81/723] Linking static target lib/librte_pci.a 00:01:36.478 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:36.478 [83/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:36.478 [84/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:36.740 [85/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:36.740 [86/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:36.740 [87/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:36.740 [88/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:36.740 [89/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:36.740 [90/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:36.740 [91/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:36.740 [92/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:36.740 [93/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:36.740 [94/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:36.740 [95/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:36.740 [96/723] Linking static target lib/librte_ring.a 00:01:36.740 [97/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:36.740 [98/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:36.740 [99/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:36.740 [100/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:36.740 [101/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.740 [102/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:36.740 
[103/723] Linking static target lib/librte_meter.a 00:01:36.740 [104/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:36.740 [105/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:36.740 [106/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:36.740 [107/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:36.740 [108/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:36.740 [109/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:36.740 [110/723] Linking static target lib/librte_telemetry.a 00:01:37.005 [111/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:37.005 [112/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:37.005 [113/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:37.005 [114/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:37.005 [115/723] Linking static target lib/librte_net.a 00:01:37.005 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:37.005 [117/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:37.005 [118/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.264 [119/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.264 [120/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:37.264 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:37.264 [122/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:37.264 [123/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:37.264 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:37.264 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:37.532 [126/723] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:37.532 [127/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.532 [128/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.532 [129/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:37.532 [130/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:37.532 [131/723] Linking static target lib/librte_mempool.a 00:01:37.532 [132/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:37.532 [133/723] Linking target lib/librte_telemetry.so.24.2 00:01:37.532 [134/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:37.532 [135/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:37.532 [136/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:37.794 [137/723] Linking static target lib/librte_cmdline.a 00:01:37.794 [138/723] Linking static target lib/librte_eal.a 00:01:37.794 [139/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:37.794 [140/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:37.794 [141/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:37.794 [142/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:37.794 [143/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:37.794 [144/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:37.794 [145/723] Linking static target lib/librte_cfgfile.a 00:01:37.794 [146/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.057 [147/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:38.057 [148/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:38.057 [149/723] 
Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:38.057 [150/723] Linking static target lib/librte_metrics.a 00:01:38.057 [151/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:38.057 [152/723] Linking static target lib/librte_rcu.a 00:01:38.057 [153/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:38.057 [154/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:38.319 [155/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:38.319 [156/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:38.319 [157/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:38.319 [158/723] Linking static target lib/librte_bitratestats.a 00:01:38.319 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:38.319 [160/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:38.319 [161/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.319 [162/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:38.581 [163/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:38.581 [164/723] Linking static target lib/librte_mbuf.a 00:01:38.581 [165/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.581 [166/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:38.581 [167/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.581 [168/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:38.581 [169/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:38.581 [170/723] Linking static target lib/librte_timer.a 00:01:38.581 [171/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.581 [172/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:38.581 [173/723] 
Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.581 [174/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.841 [175/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:38.841 [176/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:38.841 [177/723] Linking static target lib/librte_bbdev.a 00:01:38.841 [178/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:38.841 [179/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:38.841 [180/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.841 [181/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.841 [182/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.102 [183/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.102 [184/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:39.103 [185/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:39.103 [186/723] Linking static target lib/librte_compressdev.a 00:01:39.103 [187/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:39.103 [188/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:39.103 [189/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:39.103 [190/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.371 [191/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:39.371 [192/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.371 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.637 [194/723] Compiling 
C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:39.637 [195/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:39.637 [196/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.902 [197/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.903 [198/723] Linking static target lib/librte_dmadev.a 00:01:39.903 [199/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.903 [200/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:39.903 [201/723] Linking static target lib/librte_distributor.a 00:01:39.903 [202/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:39.903 [203/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:39.903 [204/723] Linking static target lib/librte_dispatcher.a 00:01:39.903 [205/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:40.166 [206/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:40.166 [207/723] Linking static target lib/librte_bpf.a 00:01:40.166 [208/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:40.166 [209/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:40.166 [210/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:40.166 [211/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:40.166 [212/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:40.166 [213/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:40.166 [214/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.166 [215/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:40.166 [216/723] Linking static target lib/librte_gpudev.a 00:01:40.166 [217/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:40.166 
[218/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:40.166 [219/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:40.427 [220/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.427 [221/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:40.427 [222/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.427 [223/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:40.427 [224/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:40.427 [225/723] Linking static target lib/librte_gro.a 00:01:40.427 [226/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:40.427 [227/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.427 [228/723] Linking static target lib/librte_jobstats.a 00:01:40.427 [229/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:40.427 [230/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:40.427 [231/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:40.427 [232/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.427 [233/723] Linking static target lib/librte_gso.a 00:01:40.427 [234/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:40.688 [235/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.688 [236/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:40.688 [237/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.688 [238/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.688 [239/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:40.688 
[240/723] Linking static target lib/librte_latencystats.a 00:01:40.953 [241/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:40.953 [242/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.953 [243/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:40.953 [244/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:40.953 [245/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.953 [246/723] Linking static target lib/librte_ip_frag.a 00:01:40.953 [247/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:40.953 [248/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:40.953 [249/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:40.953 [250/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:40.953 [251/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:41.215 [252/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:41.215 [253/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:41.215 [254/723] Linking static target lib/librte_efd.a 00:01:41.215 [255/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.215 [256/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:41.215 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:41.479 [258/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:41.479 [259/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.479 [260/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:41.479 [261/723] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:01:41.479 [262/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:41.479 [263/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:41.479 [264/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.479 [265/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.742 [266/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:41.742 [267/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:41.742 [268/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:41.742 [269/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:41.742 [270/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:41.742 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:41.742 [272/723] Linking static target lib/librte_regexdev.a 00:01:42.000 [273/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:42.000 [274/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:42.000 [275/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:42.000 [276/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:42.000 [277/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:42.000 [278/723] Linking static target lib/librte_rawdev.a 00:01:42.000 [279/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:42.000 [280/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:42.000 [281/723] Linking static target lib/librte_pcapng.a 00:01:42.000 [282/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:42.000 [283/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:42.000 [284/723] 
Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:42.000 [285/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:42.000 [286/723] Linking static target lib/librte_power.a 00:01:42.000 [287/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:42.262 [288/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:42.262 [289/723] Linking static target lib/librte_mldev.a 00:01:42.262 [290/723] Linking static target lib/librte_lpm.a 00:01:42.262 [291/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:42.262 [292/723] Linking static target lib/librte_stack.a 00:01:42.262 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:42.262 [294/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:42.262 [295/723] Linking static target lib/acl/libavx2_tmp.a 00:01:42.262 [296/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:42.262 [297/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:42.262 [298/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:42.526 [299/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.526 [300/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.526 [301/723] Linking static target lib/librte_reorder.a 00:01:42.526 [302/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.526 [303/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.526 [304/723] Linking static target lib/librte_security.a 00:01:42.526 [305/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:42.526 [306/723] Linking static target lib/librte_cryptodev.a 00:01:42.526 [307/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:42.790 [308/723] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:42.790 [309/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.790 [310/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.790 [311/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:42.790 [312/723] Linking static target lib/librte_hash.a 00:01:42.790 [313/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:43.058 [314/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.058 [315/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.058 [316/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:43.058 [317/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:43.058 [318/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:43.058 [319/723] Linking static target lib/acl/libavx512_tmp.a 00:01:43.058 [320/723] Linking static target lib/librte_rib.a 00:01:43.058 [321/723] Linking static target lib/librte_acl.a 00:01:43.058 [322/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:43.058 [323/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:43.058 [324/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:43.058 [325/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:43.058 [326/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.058 [327/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:43.058 [328/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:43.058 [329/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:43.323 [330/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.323 [331/723] Compiling C 
object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:43.323 [332/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:43.323 [333/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:43.323 [334/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:43.323 [335/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:43.323 [336/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:43.323 [337/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:43.323 [338/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:43.585 [339/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.585 [340/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:43.585 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.850 [342/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:43.850 [343/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.850 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:44.110 [345/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:44.110 [346/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:44.370 [347/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:44.370 [348/723] Linking static target lib/librte_eventdev.a 00:01:44.370 [349/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:44.370 [350/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:44.370 [351/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:44.370 [352/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:44.370 [353/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:44.370 [354/723] Compiling C 
object lib/librte_fib.a.p/fib_trie.c.o 00:01:44.370 [355/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:44.370 [356/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.370 [357/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:44.631 [358/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:44.631 [359/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:44.631 [360/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.631 [361/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:44.631 [362/723] Linking static target lib/librte_sched.a 00:01:44.631 [363/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:44.631 [364/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:44.631 [365/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:44.631 [366/723] Linking static target lib/librte_member.a 00:01:44.631 [367/723] Linking static target lib/librte_fib.a 00:01:44.631 [368/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:44.631 [369/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:44.631 [370/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:44.895 [371/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:44.895 [372/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:44.895 [373/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:44.895 [374/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:44.895 [375/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:44.895 [376/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:44.895 [377/723] Compiling C 
object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:44.895 [378/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:44.895 [379/723] Linking static target lib/librte_ethdev.a 00:01:45.157 [380/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:45.157 [381/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.157 [382/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:45.157 [383/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:45.157 [384/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:45.157 [385/723] Linking static target lib/librte_ipsec.a 00:01:45.157 [386/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.157 [387/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:45.423 [388/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.423 [389/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:45.423 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:45.423 [391/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:45.682 [392/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:45.682 [393/723] Linking static target lib/librte_pdump.a 00:01:45.682 [394/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:45.682 [395/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:45.682 [396/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:45.682 [397/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.682 [398/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:45.682 [399/723] Compiling C object 
lib/librte_graph.a.p/graph_node.c.o 00:01:45.682 [400/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:45.682 [401/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:45.944 [402/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:45.944 [403/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:45.944 [404/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:45.944 [405/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:45.944 [406/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:45.944 [407/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.944 [408/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:46.209 [409/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:46.209 [410/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:46.209 [411/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:46.209 [412/723] Linking static target lib/librte_pdcp.a 00:01:46.209 [413/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:46.209 [414/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:46.209 [415/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:46.209 [416/723] Linking static target lib/librte_table.a 00:01:46.209 [417/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:46.209 [418/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:46.469 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:46.469 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:46.469 [421/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.736 [422/723] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:46.736 [423/723] Linking static target lib/librte_graph.a 00:01:46.736 [424/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.736 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.736 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:46.736 [427/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.000 [428/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:47.000 [429/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:47.000 [430/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:47.000 [431/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:47.000 [432/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:47.000 [433/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:47.000 [434/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:47.264 [435/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:47.264 [436/723] Linking static target lib/librte_port.a 00:01:47.264 [437/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.264 [438/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.264 [439/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:47.264 [440/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:47.264 [441/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.264 [442/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:47.528 [443/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.528 [444/723] Generating lib/eventdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:47.528 [445/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:47.528 [446/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:47.528 [447/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.528 [448/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.528 [449/723] Linking static target drivers/librte_bus_vdev.a 00:01:47.528 [450/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.793 [451/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:47.793 [452/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:47.793 [453/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:47.793 [454/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.793 [455/723] Linking static target drivers/librte_bus_pci.a 00:01:47.793 [456/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.793 [457/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:47.793 [458/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:47.793 [459/723] Linking static target lib/librte_node.a 00:01:47.793 [460/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:48.054 [461/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:48.054 [462/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:48.054 [463/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:48.054 [464/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.054 [465/723] Generating lib/port.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:48.054 [466/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:48.054 [467/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:48.054 [468/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:48.319 [469/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:48.319 [470/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:48.319 [471/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:48.319 [472/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.319 [473/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.319 [474/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:48.319 [475/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.635 [476/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:48.635 [477/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:48.635 [478/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:48.635 [479/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.635 [480/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.635 [481/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:48.635 [482/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:48.635 [483/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.635 [484/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:48.635 [485/723] Linking static target drivers/librte_mempool_ring.a 00:01:48.635 [486/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:48.635 
[487/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.635 [488/723] Linking target lib/librte_eal.so.24.2 00:01:48.897 [489/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:48.897 [490/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:48.897 [491/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:48.897 [492/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:48.897 [493/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:48.897 [494/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:48.897 [495/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:48.897 [496/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:49.162 [497/723] Linking target lib/librte_ring.so.24.2 00:01:49.162 [498/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:49.162 [499/723] Linking target lib/librte_meter.so.24.2 00:01:49.162 [500/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:49.425 [501/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:49.425 [502/723] Linking target lib/librte_pci.so.24.2 00:01:49.425 [503/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:49.425 [504/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:49.425 [505/723] Linking target lib/librte_timer.so.24.2 00:01:49.425 [506/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:49.425 [507/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:49.425 [508/723] Linking target lib/librte_acl.so.24.2 00:01:49.425 [509/723] Linking target lib/librte_cfgfile.so.24.2 00:01:49.425 [510/723] Linking target lib/librte_dmadev.so.24.2 00:01:49.425 [511/723] Linking target lib/librte_rcu.so.24.2 00:01:49.425 [512/723] Generating symbol file 
lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:49.425 [513/723] Linking target lib/librte_mempool.so.24.2 00:01:49.425 [514/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:49.425 [515/723] Linking target lib/librte_jobstats.so.24.2 00:01:49.689 [516/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:49.689 [517/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:49.689 [518/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:49.689 [519/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:49.689 [520/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:49.689 [521/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:49.689 [522/723] Linking target lib/librte_rawdev.so.24.2 00:01:49.689 [523/723] Linking target lib/librte_stack.so.24.2 00:01:49.689 [524/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:49.689 [525/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:49.689 [526/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:49.690 [527/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:49.690 [528/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:49.690 [529/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:49.690 [530/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:49.690 [531/723] Linking target lib/librte_mbuf.so.24.2 00:01:49.958 [532/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:49.958 [533/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:49.958 [534/723] Generating symbol file 
drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:49.958 [535/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:49.958 [536/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:49.958 [537/723] Linking target lib/librte_rib.so.24.2 00:01:49.958 [538/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:49.958 [539/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:50.223 [540/723] Linking target lib/librte_net.so.24.2 00:01:50.223 [541/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:50.223 [542/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:50.223 [543/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:50.223 [544/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:50.223 [545/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:50.223 [546/723] Linking target lib/librte_bbdev.so.24.2 00:01:50.223 [547/723] Linking target lib/librte_compressdev.so.24.2 00:01:50.223 [548/723] Linking target lib/librte_cryptodev.so.24.2 00:01:50.223 [549/723] Linking target lib/librte_distributor.so.24.2 00:01:50.223 [550/723] Linking target lib/librte_gpudev.so.24.2 00:01:50.224 [551/723] Linking target lib/librte_regexdev.so.24.2 00:01:50.224 [552/723] Linking target lib/librte_mldev.so.24.2 00:01:50.224 [553/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:50.224 [554/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:50.224 [555/723] Linking target lib/librte_reorder.so.24.2 00:01:50.224 [556/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:50.224 [557/723] Linking static target drivers/net/i40e/base/libi40e_base.a 
00:01:50.224 [558/723] Linking target lib/librte_sched.so.24.2 00:01:50.487 [559/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:50.487 [560/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:50.487 [561/723] Linking target lib/librte_fib.so.24.2 00:01:50.487 [562/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:50.487 [563/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:50.487 [564/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:50.487 [565/723] Linking target lib/librte_hash.so.24.2 00:01:50.487 [566/723] Linking target lib/librte_cmdline.so.24.2 00:01:50.487 [567/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:50.487 [568/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:50.487 [569/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:50.487 [570/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:50.488 [571/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:50.488 [572/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:50.488 [573/723] Linking target lib/librte_security.so.24.2 00:01:50.488 [574/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:50.488 [575/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:50.488 [576/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:50.757 [577/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:50.757 [578/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:50.757 
[579/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:50.757 [580/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:50.757 [581/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:50.757 [582/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:50.757 [583/723] Linking target lib/librte_efd.so.24.2 00:01:50.757 [584/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:50.757 [585/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:50.757 [586/723] Linking target lib/librte_lpm.so.24.2 00:01:50.757 [587/723] Linking target lib/librte_member.so.24.2 00:01:50.757 [588/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:50.757 [589/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:50.757 [590/723] Linking target lib/librte_ipsec.so.24.2 00:01:51.016 [591/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:51.016 [592/723] Linking target lib/librte_pdcp.so.24.2 00:01:51.016 [593/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:51.016 [594/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:51.016 [595/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:51.016 [596/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:51.016 [597/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:51.303 [598/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:51.303 [599/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:51.303 [600/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:51.618 [601/723] 
Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:51.618 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:51.618 [603/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:51.618 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:51.618 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:51.618 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:51.618 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:51.895 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:51.895 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:51.895 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:51.895 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:51.895 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:51.895 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:51.895 [614/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:51.895 [615/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:51.895 [616/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:52.155 [617/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:52.155 [618/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:52.155 [619/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:52.155 [620/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:52.155 [621/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:52.155 
[622/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:52.413 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:52.672 [624/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:52.672 [625/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:52.672 [626/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:52.672 [627/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:52.672 [628/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:52.672 [629/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:52.672 [630/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:52.931 [631/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:52.931 [632/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:52.931 [633/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:52.931 [634/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:52.931 [635/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:52.931 [636/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:52.931 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:52.931 [638/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.931 [639/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:53.197 [640/723] Linking target lib/librte_ethdev.so.24.2 00:01:53.197 [641/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:53.197 [642/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:53.197 [643/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:53.197 [644/723] Compiling C 
object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:53.197 [645/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:53.197 [646/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:53.197 [647/723] Linking target lib/librte_pcapng.so.24.2 00:01:53.197 [648/723] Linking target lib/librte_gso.so.24.2 00:01:53.197 [649/723] Linking target lib/librte_gro.so.24.2 00:01:53.197 [650/723] Linking target lib/librte_ip_frag.so.24.2 00:01:53.197 [651/723] Linking target lib/librte_bpf.so.24.2 00:01:53.197 [652/723] Linking target lib/librte_metrics.so.24.2 00:01:53.463 [653/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:53.463 [654/723] Linking target lib/librte_power.so.24.2 00:01:53.463 [655/723] Linking target lib/librte_eventdev.so.24.2 00:01:53.463 [656/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:53.463 [657/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:53.463 [658/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:53.463 [659/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:53.463 [660/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:53.463 [661/723] Linking target lib/librte_pdump.so.24.2 00:01:53.463 [662/723] Linking target lib/librte_dispatcher.so.24.2 00:01:53.463 [663/723] Linking target lib/librte_graph.so.24.2 00:01:53.463 [664/723] Linking target lib/librte_port.so.24.2 00:01:53.463 [665/723] Linking target lib/librte_latencystats.so.24.2 00:01:53.463 [666/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:53.463 [667/723] Linking target lib/librte_bitratestats.so.24.2 00:01:53.721 [668/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:53.721 [669/723] Generating symbol file 
lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:53.721 [670/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:53.721 [671/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:53.721 [672/723] Linking target lib/librte_table.so.24.2 00:01:53.721 [673/723] Linking target lib/librte_node.so.24.2 00:01:53.721 [674/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:53.721 [675/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:53.978 [676/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:53.978 [677/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:54.236 [678/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:54.236 [679/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.494 [680/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:54.494 [681/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:54.494 [682/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:54.494 [683/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:54.752 [684/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:55.011 [685/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:55.011 [686/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:55.011 [687/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:55.011 [688/723] Linking static target drivers/librte_net_i40e.a 00:01:55.269 [689/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:55.269 [690/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:55.526 [691/723] Generating drivers/rte_net_i40e.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:55.526 [692/723] Linking target drivers/librte_net_i40e.so.24.2 00:01:56.091 [693/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:56.349 [694/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:56.914 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:05.023 [696/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:05.023 [697/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:05.023 [698/723] Linking static target lib/librte_vhost.a 00:02:05.023 [699/723] Linking static target lib/librte_pipeline.a 00:02:05.281 [700/723] Linking target app/dpdk-test-dma-perf 00:02:05.281 [701/723] Linking target app/dpdk-proc-info 00:02:05.281 [702/723] Linking target app/dpdk-test-compress-perf 00:02:05.281 [703/723] Linking target app/dpdk-dumpcap 00:02:05.281 [704/723] Linking target app/dpdk-test-pipeline 00:02:05.281 [705/723] Linking target app/dpdk-test-gpudev 00:02:05.281 [706/723] Linking target app/dpdk-test-security-perf 00:02:05.281 [707/723] Linking target app/dpdk-test-acl 00:02:05.539 [708/723] Linking target app/dpdk-test-flow-perf 00:02:05.539 [709/723] Linking target app/dpdk-test-crypto-perf 00:02:05.539 [710/723] Linking target app/dpdk-pdump 00:02:05.539 [711/723] Linking target app/dpdk-test-sad 00:02:05.539 [712/723] Linking target app/dpdk-test-regex 00:02:05.539 [713/723] Linking target app/dpdk-graph 00:02:05.539 [714/723] Linking target app/dpdk-test-mldev 00:02:05.539 [715/723] Linking target app/dpdk-test-fib 00:02:05.539 [716/723] Linking target app/dpdk-test-bbdev 00:02:05.539 [717/723] Linking target app/dpdk-test-cmdline 00:02:05.539 [718/723] Linking target app/dpdk-test-eventdev 00:02:05.539 [719/723] Linking target app/dpdk-testpmd 00:02:05.796 [720/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:06.054 [721/723] Linking target lib/librte_vhost.so.24.2 00:02:07.429 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.429 [723/723] Linking target lib/librte_pipeline.so.24.2 00:02:07.429 08:35:25 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:07.429 08:35:25 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:07.429 08:35:25 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:07.687 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:07.687 [0/1] Installing files. 00:02:07.947 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:07.948 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:07.948 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:07.948 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:07.948 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 
00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.948 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.949 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.949 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.950 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:07.950 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:07.951 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.951 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.951 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.952 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.953 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.953 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.953 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.953 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.953 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.953 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.953 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.953 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.953 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:07.954 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.213 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:08.214 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:08.214 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:08.214 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:08.214 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:08.214 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:08.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.214 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.474 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.475 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:08.476 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.476 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.477 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:08.478 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:08.478 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:08.478 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:08.478 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:08.478 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:08.478 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:08.478 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:08.478 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:08.478 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:08.478 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:08.478 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:08.478 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:08.478 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:08.478 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:08.478 Installing symlink pointing 
to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:08.478 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:08.478 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:08.479 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:08.479 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:08.479 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:08.479 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:08.479 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:08.479 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:08.479 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:08.479 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:08.479 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:08.479 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:08.479 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:08.479 Installing 
symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:08.479 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:08.479 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:08.479 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:08.479 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:08.479 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:08.479 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:08.479 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:08.479 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:08.479 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:08.479 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:08.479 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:08.479 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:08.479 Installing symlink pointing to librte_bpf.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:08.479 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:08.479 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:08.479 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:08.479 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:08.479 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:08.479 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:08.479 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:08.479 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:08.479 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:08.479 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:08.479 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:08.479 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:08.479 Installing symlink pointing to librte_efd.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:08.479 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:08.479 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:08.479 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:08.479 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:08.479 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:08.479 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:08.479 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:08.479 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:08.479 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:08.479 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:08.479 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:08.479 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:08.479 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 
00:02:08.479 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:08.479 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:08.479 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:08.479 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:08.479 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:08.479 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:08.479 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:08.479 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:08.479 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:08.479 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:08.479 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:08.479 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:08.479 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:08.479 Installing symlink pointing to librte_regexdev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:08.479 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:08.479 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:08.479 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:08.479 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:08.479 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:08.479 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:08.479 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:08.479 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:08.480 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:08.480 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:08.480 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:08.480 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:08.480 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:08.480 Installing 
symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:08.480 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:08.480 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:08.480 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:08.480 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:08.480 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:08.480 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:08.480 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:08.480 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:08.480 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:08.480 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:08.480 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:08.480 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:08.480 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:08.480 
Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:08.480 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:08.480 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:08.480 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:08.480 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:08.480 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:08.480 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:08.480 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:08.480 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:08.480 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:08.480 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:08.480 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:08.480 Installing symlink pointing to librte_net_i40e.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:08.480 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:08.480 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:08.480 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:08.480 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:08.480 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:08.480 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:08.480 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:08.480 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:08.480 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:08.480 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:08.480 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:08.480 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:08.480 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:08.480 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:02:08.480 08:35:26 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:08.480 08:35:26 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.480 00:02:08.480 real 0m39.654s 00:02:08.480 user 13m55.380s 00:02:08.480 sys 1m59.811s 00:02:08.480 08:35:26 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:08.480 08:35:26 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:08.480 ************************************ 00:02:08.480 END TEST build_native_dpdk 
00:02:08.480 ************************************ 00:02:08.480 08:35:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.480 08:35:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.480 08:35:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.480 08:35:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.480 08:35:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.480 08:35:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:08.480 08:35:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.480 08:35:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:08.480 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:08.738 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:08.738 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:08.738 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:08.995 Using 'verbs' RDMA provider 00:02:19.570 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:27.681 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:27.940 Creating mk/config.mk...done. 00:02:27.940 Creating mk/cc.flags.mk...done. 00:02:27.940 Type 'make' to build. 
00:02:27.940 08:35:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:27.940 08:35:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:27.940 08:35:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:27.940 08:35:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.940 ************************************ 00:02:27.940 START TEST make 00:02:27.940 ************************************ 00:02:27.940 08:35:46 make -- common/autotest_common.sh@1125 -- $ make -j48 00:02:28.198 make[1]: Nothing to be done for 'all'. 00:02:30.121 The Meson build system 00:02:30.121 Version: 1.3.1 00:02:30.121 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:30.121 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:30.121 Build type: native build 00:02:30.121 Project name: libvfio-user 00:02:30.121 Project version: 0.0.1 00:02:30.121 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:30.121 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:30.121 Host machine cpu family: x86_64 00:02:30.121 Host machine cpu: x86_64 00:02:30.121 Run-time dependency threads found: YES 00:02:30.121 Library dl found: YES 00:02:30.121 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:30.121 Run-time dependency json-c found: YES 0.17 00:02:30.121 Run-time dependency cmocka found: YES 1.1.7 00:02:30.121 Program pytest-3 found: NO 00:02:30.121 Program flake8 found: NO 00:02:30.121 Program misspell-fixer found: NO 00:02:30.121 Program restructuredtext-lint found: NO 00:02:30.121 Program valgrind found: YES (/usr/bin/valgrind) 00:02:30.121 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:30.121 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:30.121 Compiler for C supports arguments -Wwrite-strings: YES 00:02:30.121 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:30.121 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:30.121 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:30.121 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:30.122 Build targets in project: 8 00:02:30.122 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:30.122 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:30.122 00:02:30.122 libvfio-user 0.0.1 00:02:30.122 00:02:30.122 User defined options 00:02:30.122 buildtype : debug 00:02:30.122 default_library: shared 00:02:30.122 libdir : /usr/local/lib 00:02:30.122 00:02:30.122 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:30.389 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:30.651 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:30.651 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:30.651 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:30.651 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:30.651 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:30.651 [6/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:30.651 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:30.911 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:30.911 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:30.911 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:30.911 [11/37] Compiling C object samples/null.p/null.c.o 
00:02:30.911 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:30.911 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:30.911 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:30.911 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:30.911 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:30.911 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:30.911 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:30.911 [19/37] Compiling C object samples/server.p/server.c.o
00:02:30.911 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:30.911 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:30.911 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:30.911 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:30.911 [24/37] Compiling C object samples/client.p/client.c.o
00:02:30.911 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:30.911 [26/37] Linking target samples/client
00:02:30.911 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:30.911 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:30.911 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:02:31.172 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:31.172 [31/37] Linking target test/unit_tests
00:02:31.172 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:31.172 [33/37] Linking target samples/null
00:02:31.172 [34/37] Linking target samples/server
00:02:31.172 [35/37] Linking target samples/gpio-pci-idio-16
00:02:31.172 [36/37] Linking target samples/lspci
00:02:31.172 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:31.437 INFO: autodetecting backend as ninja
00:02:31.437 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:31.437 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:32.006 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:32.006 ninja: no work to do.
00:02:44.208 CC lib/ut_mock/mock.o
00:02:44.208 CC lib/ut/ut.o
00:02:44.208 CC lib/log/log.o
00:02:44.208 CC lib/log/log_flags.o
00:02:44.208 CC lib/log/log_deprecated.o
00:02:44.208 LIB libspdk_ut_mock.a
00:02:44.208 LIB libspdk_ut.a
00:02:44.208 LIB libspdk_log.a
00:02:44.208 SO libspdk_ut.so.2.0
00:02:44.208 SO libspdk_ut_mock.so.6.0
00:02:44.208 SO libspdk_log.so.7.0
00:02:44.208 SYMLINK libspdk_ut.so
00:02:44.208 SYMLINK libspdk_ut_mock.so
00:02:44.208 SYMLINK libspdk_log.so
00:02:44.208 CC lib/ioat/ioat.o
00:02:44.208 CXX lib/trace_parser/trace.o
00:02:44.208 CC lib/util/base64.o
00:02:44.208 CC lib/dma/dma.o
00:02:44.208 CC lib/util/bit_array.o
00:02:44.208 CC lib/util/cpuset.o
00:02:44.208 CC lib/util/crc16.o
00:02:44.208 CC lib/util/crc32.o
00:02:44.208 CC lib/util/crc32c.o
00:02:44.208 CC lib/util/crc32_ieee.o
00:02:44.208 CC lib/util/crc64.o
00:02:44.208 CC lib/util/dif.o
00:02:44.208 CC lib/util/fd.o
00:02:44.208 CC lib/util/fd_group.o
00:02:44.208 CC lib/util/file.o
00:02:44.208 CC lib/util/hexlify.o
00:02:44.208 CC lib/util/iov.o
00:02:44.208 CC lib/util/math.o
00:02:44.208 CC lib/util/net.o
00:02:44.208 CC lib/util/pipe.o
00:02:44.208 CC lib/util/strerror_tls.o
00:02:44.208 CC lib/util/string.o
00:02:44.208 CC lib/util/uuid.o
00:02:44.208 CC lib/util/xor.o
00:02:44.208 CC lib/util/zipf.o
00:02:44.208 CC lib/vfio_user/host/vfio_user_pci.o
00:02:44.208 CC lib/vfio_user/host/vfio_user.o
00:02:44.208 LIB libspdk_dma.a
00:02:44.467 SO libspdk_dma.so.4.0
00:02:44.467 SYMLINK libspdk_dma.so
00:02:44.467 LIB libspdk_ioat.a
00:02:44.467 SO libspdk_ioat.so.7.0
00:02:44.467 LIB libspdk_vfio_user.a
00:02:44.467 SYMLINK libspdk_ioat.so
00:02:44.467 SO libspdk_vfio_user.so.5.0
00:02:44.725 SYMLINK libspdk_vfio_user.so
00:02:44.725 LIB libspdk_util.a
00:02:44.725 SO libspdk_util.so.10.0
00:02:44.984 SYMLINK libspdk_util.so
00:02:44.984 CC lib/idxd/idxd.o
00:02:44.984 CC lib/vmd/vmd.o
00:02:44.984 CC lib/conf/conf.o
00:02:44.984 CC lib/vmd/led.o
00:02:44.984 CC lib/json/json_parse.o
00:02:44.984 CC lib/env_dpdk/env.o
00:02:44.984 CC lib/idxd/idxd_user.o
00:02:44.984 CC lib/json/json_util.o
00:02:44.984 CC lib/env_dpdk/memory.o
00:02:44.984 CC lib/json/json_write.o
00:02:44.984 CC lib/idxd/idxd_kernel.o
00:02:44.984 CC lib/env_dpdk/pci.o
00:02:44.984 CC lib/env_dpdk/init.o
00:02:44.984 CC lib/rdma_utils/rdma_utils.o
00:02:44.984 CC lib/rdma_provider/common.o
00:02:44.984 CC lib/env_dpdk/threads.o
00:02:44.984 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:44.984 CC lib/env_dpdk/pci_ioat.o
00:02:44.984 CC lib/env_dpdk/pci_virtio.o
00:02:44.984 CC lib/env_dpdk/pci_vmd.o
00:02:44.984 CC lib/env_dpdk/pci_idxd.o
00:02:44.984 CC lib/env_dpdk/pci_event.o
00:02:44.984 CC lib/env_dpdk/sigbus_handler.o
00:02:44.984 CC lib/env_dpdk/pci_dpdk.o
00:02:44.984 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:44.984 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:44.984 LIB libspdk_trace_parser.a
00:02:45.242 SO libspdk_trace_parser.so.5.0
00:02:45.242 SYMLINK libspdk_trace_parser.so
00:02:45.242 LIB libspdk_conf.a
00:02:45.242 SO libspdk_conf.so.6.0
00:02:45.242 LIB libspdk_rdma_provider.a
00:02:45.242 SO libspdk_rdma_provider.so.6.0
00:02:45.499 SYMLINK libspdk_conf.so
00:02:45.499 LIB libspdk_json.a
00:02:45.499 SO libspdk_json.so.6.0
00:02:45.499 SYMLINK libspdk_rdma_provider.so
00:02:45.499 LIB libspdk_rdma_utils.a
00:02:45.499 SYMLINK libspdk_json.so
00:02:45.499 SO libspdk_rdma_utils.so.1.0
00:02:45.499 SYMLINK libspdk_rdma_utils.so
00:02:45.757 CC lib/jsonrpc/jsonrpc_server.o
00:02:45.757 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:45.757 CC lib/jsonrpc/jsonrpc_client.o
00:02:45.757 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:45.757 LIB libspdk_idxd.a
00:02:45.757 SO libspdk_idxd.so.12.0
00:02:45.757 SYMLINK libspdk_idxd.so
00:02:45.757 LIB libspdk_vmd.a
00:02:45.757 SO libspdk_vmd.so.6.0
00:02:45.757 SYMLINK libspdk_vmd.so
00:02:46.015 LIB libspdk_jsonrpc.a
00:02:46.015 SO libspdk_jsonrpc.so.6.0
00:02:46.015 SYMLINK libspdk_jsonrpc.so
00:02:46.278 CC lib/rpc/rpc.o
00:02:46.581 LIB libspdk_rpc.a
00:02:46.581 SO libspdk_rpc.so.6.0
00:02:46.581 SYMLINK libspdk_rpc.so
00:02:46.581 LIB libspdk_env_dpdk.a
00:02:46.581 SO libspdk_env_dpdk.so.15.0
00:02:46.581 CC lib/keyring/keyring.o
00:02:46.581 CC lib/notify/notify.o
00:02:46.581 CC lib/trace/trace.o
00:02:46.581 CC lib/keyring/keyring_rpc.o
00:02:46.581 CC lib/notify/notify_rpc.o
00:02:46.581 CC lib/trace/trace_flags.o
00:02:46.581 CC lib/trace/trace_rpc.o
00:02:46.840 SYMLINK libspdk_env_dpdk.so
00:02:46.840 LIB libspdk_notify.a
00:02:46.840 SO libspdk_notify.so.6.0
00:02:46.840 LIB libspdk_keyring.a
00:02:46.840 SYMLINK libspdk_notify.so
00:02:46.840 LIB libspdk_trace.a
00:02:46.840 SO libspdk_keyring.so.1.0
00:02:46.840 SO libspdk_trace.so.10.0
00:02:47.098 SYMLINK libspdk_keyring.so
00:02:47.098 SYMLINK libspdk_trace.so
00:02:47.098 CC lib/thread/thread.o
00:02:47.098 CC lib/thread/iobuf.o
00:02:47.098 CC lib/sock/sock.o
00:02:47.098 CC lib/sock/sock_rpc.o
00:02:47.664 LIB libspdk_sock.a
00:02:47.664 SO libspdk_sock.so.10.0
00:02:47.664 SYMLINK libspdk_sock.so
00:02:47.923 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:47.923 CC lib/nvme/nvme_ctrlr.o
00:02:47.923 CC lib/nvme/nvme_fabric.o
00:02:47.923 CC lib/nvme/nvme_ns_cmd.o
00:02:47.923 CC lib/nvme/nvme_ns.o
00:02:47.923 CC lib/nvme/nvme_pcie_common.o
00:02:47.923 CC lib/nvme/nvme_pcie.o
00:02:47.923 CC lib/nvme/nvme_qpair.o
00:02:47.923 CC lib/nvme/nvme.o
00:02:47.923 CC lib/nvme/nvme_quirks.o
00:02:47.923 CC lib/nvme/nvme_transport.o
00:02:47.923 CC lib/nvme/nvme_discovery.o
00:02:47.923 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:47.923 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:47.923 CC lib/nvme/nvme_tcp.o
00:02:47.923 CC lib/nvme/nvme_opal.o
00:02:47.923 CC lib/nvme/nvme_io_msg.o
00:02:47.923 CC lib/nvme/nvme_poll_group.o
00:02:47.923 CC lib/nvme/nvme_zns.o
00:02:47.923 CC lib/nvme/nvme_stubs.o
00:02:47.923 CC lib/nvme/nvme_auth.o
00:02:47.923 CC lib/nvme/nvme_cuse.o
00:02:47.923 CC lib/nvme/nvme_vfio_user.o
00:02:47.923 CC lib/nvme/nvme_rdma.o
00:02:48.858 LIB libspdk_thread.a
00:02:48.858 SO libspdk_thread.so.10.1
00:02:48.858 SYMLINK libspdk_thread.so
00:02:49.117 CC lib/blob/blobstore.o
00:02:49.117 CC lib/virtio/virtio.o
00:02:49.117 CC lib/vfu_tgt/tgt_endpoint.o
00:02:49.117 CC lib/init/json_config.o
00:02:49.117 CC lib/accel/accel.o
00:02:49.117 CC lib/virtio/virtio_vhost_user.o
00:02:49.117 CC lib/accel/accel_rpc.o
00:02:49.117 CC lib/vfu_tgt/tgt_rpc.o
00:02:49.117 CC lib/init/subsystem.o
00:02:49.117 CC lib/blob/request.o
00:02:49.117 CC lib/virtio/virtio_vfio_user.o
00:02:49.117 CC lib/init/subsystem_rpc.o
00:02:49.117 CC lib/accel/accel_sw.o
00:02:49.117 CC lib/blob/zeroes.o
00:02:49.117 CC lib/virtio/virtio_pci.o
00:02:49.117 CC lib/init/rpc.o
00:02:49.117 CC lib/blob/blob_bs_dev.o
00:02:49.375 LIB libspdk_init.a
00:02:49.375 SO libspdk_init.so.5.0
00:02:49.375 LIB libspdk_virtio.a
00:02:49.375 LIB libspdk_vfu_tgt.a
00:02:49.375 SYMLINK libspdk_init.so
00:02:49.375 SO libspdk_virtio.so.7.0
00:02:49.375 SO libspdk_vfu_tgt.so.3.0
00:02:49.375 SYMLINK libspdk_vfu_tgt.so
00:02:49.375 SYMLINK libspdk_virtio.so
00:02:49.634 CC lib/event/app.o
00:02:49.634 CC lib/event/reactor.o
00:02:49.634 CC lib/event/log_rpc.o
00:02:49.634 CC lib/event/app_rpc.o
00:02:49.634 CC lib/event/scheduler_static.o
00:02:49.892 LIB libspdk_event.a
00:02:49.892 SO libspdk_event.so.14.0
00:02:50.151 LIB libspdk_accel.a
00:02:50.151 SYMLINK libspdk_event.so
00:02:50.151 SO libspdk_accel.so.16.0
00:02:50.151 SYMLINK libspdk_accel.so
00:02:50.409 LIB libspdk_nvme.a
00:02:50.409 CC lib/bdev/bdev.o
00:02:50.409 CC lib/bdev/bdev_rpc.o
00:02:50.409 CC lib/bdev/bdev_zone.o
00:02:50.409 CC lib/bdev/part.o
00:02:50.409 CC lib/bdev/scsi_nvme.o
00:02:50.409 SO libspdk_nvme.so.13.1
00:02:50.668 SYMLINK libspdk_nvme.so
00:02:52.044 LIB libspdk_blob.a
00:02:52.044 SO libspdk_blob.so.11.0
00:02:52.044 SYMLINK libspdk_blob.so
00:02:52.302 CC lib/blobfs/blobfs.o
00:02:52.302 CC lib/blobfs/tree.o
00:02:52.302 CC lib/lvol/lvol.o
00:02:52.868 LIB libspdk_bdev.a
00:02:52.868 SO libspdk_bdev.so.16.0
00:02:53.138 SYMLINK libspdk_bdev.so
00:02:53.138 LIB libspdk_blobfs.a
00:02:53.138 SO libspdk_blobfs.so.10.0
00:02:53.138 SYMLINK libspdk_blobfs.so
00:02:53.138 LIB libspdk_lvol.a
00:02:53.138 SO libspdk_lvol.so.10.0
00:02:53.138 CC lib/scsi/dev.o
00:02:53.138 CC lib/ublk/ublk.o
00:02:53.138 CC lib/nbd/nbd.o
00:02:53.138 CC lib/nvmf/ctrlr.o
00:02:53.138 CC lib/scsi/lun.o
00:02:53.138 CC lib/nbd/nbd_rpc.o
00:02:53.138 CC lib/ftl/ftl_core.o
00:02:53.138 CC lib/nvmf/ctrlr_discovery.o
00:02:53.138 CC lib/scsi/port.o
00:02:53.138 CC lib/ftl/ftl_init.o
00:02:53.138 CC lib/ublk/ublk_rpc.o
00:02:53.138 CC lib/nvmf/ctrlr_bdev.o
00:02:53.138 CC lib/scsi/scsi.o
00:02:53.138 CC lib/ftl/ftl_layout.o
00:02:53.138 CC lib/nvmf/subsystem.o
00:02:53.138 CC lib/scsi/scsi_bdev.o
00:02:53.138 CC lib/ftl/ftl_debug.o
00:02:53.138 CC lib/nvmf/nvmf.o
00:02:53.138 CC lib/nvmf/nvmf_rpc.o
00:02:53.138 CC lib/scsi/scsi_pr.o
00:02:53.138 CC lib/ftl/ftl_io.o
00:02:53.138 CC lib/scsi/scsi_rpc.o
00:02:53.138 CC lib/nvmf/transport.o
00:02:53.138 CC lib/ftl/ftl_sb.o
00:02:53.138 CC lib/scsi/task.o
00:02:53.138 CC lib/nvmf/tcp.o
00:02:53.138 CC lib/ftl/ftl_l2p.o
00:02:53.138 CC lib/nvmf/stubs.o
00:02:53.138 CC lib/ftl/ftl_l2p_flat.o
00:02:53.138 CC lib/ftl/ftl_nv_cache.o
00:02:53.138 CC lib/nvmf/mdns_server.o
00:02:53.138 CC lib/nvmf/vfio_user.o
00:02:53.138 CC lib/ftl/ftl_band.o
00:02:53.138 CC lib/nvmf/rdma.o
00:02:53.138 CC lib/ftl/ftl_band_ops.o
00:02:53.138 CC lib/ftl/ftl_writer.o
00:02:53.138 CC lib/nvmf/auth.o
00:02:53.138 CC lib/ftl/ftl_rq.o
00:02:53.138 CC lib/ftl/ftl_reloc.o
00:02:53.138 CC lib/ftl/ftl_l2p_cache.o
00:02:53.138 CC lib/ftl/ftl_p2l.o
00:02:53.138 CC lib/ftl/mngt/ftl_mngt.o
00:02:53.138 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:53.138 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:53.138 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:53.138 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:53.138 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:53.403 SYMLINK libspdk_lvol.so
00:02:53.403 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:53.665 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:53.665 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:53.665 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:53.665 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:53.665 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:53.665 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:53.665 CC lib/ftl/utils/ftl_conf.o
00:02:53.665 CC lib/ftl/utils/ftl_md.o
00:02:53.665 CC lib/ftl/utils/ftl_mempool.o
00:02:53.665 CC lib/ftl/utils/ftl_bitmap.o
00:02:53.665 CC lib/ftl/utils/ftl_property.o
00:02:53.665 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:53.665 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:53.665 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:53.665 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:53.665 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:53.665 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:53.665 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:53.665 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:53.923 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:53.923 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:53.923 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:53.923 CC lib/ftl/base/ftl_base_dev.o
00:02:53.923 CC lib/ftl/base/ftl_base_bdev.o
00:02:53.923 CC lib/ftl/ftl_trace.o
00:02:53.923 LIB libspdk_nbd.a
00:02:53.923 SO libspdk_nbd.so.7.0
00:02:54.182 SYMLINK libspdk_nbd.so
00:02:54.182 LIB libspdk_scsi.a
00:02:54.182 SO libspdk_scsi.so.9.0
00:02:54.440 SYMLINK libspdk_scsi.so
00:02:54.440 LIB libspdk_ublk.a
00:02:54.440 SO libspdk_ublk.so.3.0
00:02:54.440 SYMLINK libspdk_ublk.so
00:02:54.440 CC lib/vhost/vhost.o
00:02:54.440 CC lib/iscsi/conn.o
00:02:54.440 CC lib/vhost/vhost_rpc.o
00:02:54.440 CC lib/iscsi/init_grp.o
00:02:54.440 CC lib/vhost/vhost_scsi.o
00:02:54.440 CC lib/iscsi/iscsi.o
00:02:54.440 CC lib/vhost/vhost_blk.o
00:02:54.440 CC lib/iscsi/md5.o
00:02:54.440 CC lib/vhost/rte_vhost_user.o
00:02:54.440 CC lib/iscsi/param.o
00:02:54.440 CC lib/iscsi/portal_grp.o
00:02:54.440 CC lib/iscsi/tgt_node.o
00:02:54.440 CC lib/iscsi/iscsi_subsystem.o
00:02:54.440 CC lib/iscsi/iscsi_rpc.o
00:02:54.440 CC lib/iscsi/task.o
00:02:54.698 LIB libspdk_ftl.a
00:02:54.956 SO libspdk_ftl.so.9.0
00:02:55.214 SYMLINK libspdk_ftl.so
00:02:55.780 LIB libspdk_vhost.a
00:02:55.780 SO libspdk_vhost.so.8.0
00:02:55.780 LIB libspdk_nvmf.a
00:02:55.780 SYMLINK libspdk_vhost.so
00:02:55.780 SO libspdk_nvmf.so.19.0
00:02:56.038 LIB libspdk_iscsi.a
00:02:56.038 SO libspdk_iscsi.so.8.0
00:02:56.038 SYMLINK libspdk_nvmf.so
00:02:56.038 SYMLINK libspdk_iscsi.so
00:02:56.297 CC module/vfu_device/vfu_virtio.o
00:02:56.297 CC module/vfu_device/vfu_virtio_blk.o
00:02:56.297 CC module/env_dpdk/env_dpdk_rpc.o
00:02:56.297 CC module/vfu_device/vfu_virtio_scsi.o
00:02:56.297 CC module/vfu_device/vfu_virtio_rpc.o
00:02:56.555 CC module/blob/bdev/blob_bdev.o
00:02:56.555 CC module/sock/posix/posix.o
00:02:56.555 CC module/keyring/linux/keyring.o
00:02:56.555 CC module/scheduler/gscheduler/gscheduler.o
00:02:56.555 CC module/accel/ioat/accel_ioat.o
00:02:56.555 CC module/accel/dsa/accel_dsa.o
00:02:56.555 CC module/keyring/linux/keyring_rpc.o
00:02:56.555 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:56.555 CC module/accel/ioat/accel_ioat_rpc.o
00:02:56.555 CC module/accel/dsa/accel_dsa_rpc.o
00:02:56.555 CC module/accel/error/accel_error.o
00:02:56.555 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:56.555 CC module/accel/iaa/accel_iaa.o
00:02:56.555 CC module/accel/error/accel_error_rpc.o
00:02:56.555 CC module/keyring/file/keyring.o
00:02:56.555 CC module/keyring/file/keyring_rpc.o
00:02:56.555 CC module/accel/iaa/accel_iaa_rpc.o
00:02:56.555 LIB libspdk_env_dpdk_rpc.a
00:02:56.555 SO libspdk_env_dpdk_rpc.so.6.0
00:02:56.555 SYMLINK libspdk_env_dpdk_rpc.so
00:02:56.555 LIB libspdk_keyring_linux.a
00:02:56.555 LIB libspdk_keyring_file.a
00:02:56.555 LIB libspdk_scheduler_gscheduler.a
00:02:56.555 LIB libspdk_scheduler_dpdk_governor.a
00:02:56.814 SO libspdk_keyring_linux.so.1.0
00:02:56.814 SO libspdk_scheduler_gscheduler.so.4.0
00:02:56.814 SO libspdk_keyring_file.so.1.0
00:02:56.814 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:56.814 LIB libspdk_accel_error.a
00:02:56.814 LIB libspdk_accel_ioat.a
00:02:56.814 LIB libspdk_scheduler_dynamic.a
00:02:56.814 SO libspdk_accel_error.so.2.0
00:02:56.814 LIB libspdk_accel_iaa.a
00:02:56.814 SO libspdk_accel_ioat.so.6.0
00:02:56.814 SYMLINK libspdk_scheduler_gscheduler.so
00:02:56.814 SO libspdk_scheduler_dynamic.so.4.0
00:02:56.814 SYMLINK libspdk_keyring_linux.so
00:02:56.814 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:56.814 SYMLINK libspdk_keyring_file.so
00:02:56.814 SO libspdk_accel_iaa.so.3.0
00:02:56.814 LIB libspdk_accel_dsa.a
00:02:56.814 SYMLINK libspdk_accel_error.so
00:02:56.814 SYMLINK libspdk_accel_ioat.so
00:02:56.814 SYMLINK libspdk_scheduler_dynamic.so
00:02:56.814 LIB libspdk_blob_bdev.a
00:02:56.814 SO libspdk_accel_dsa.so.5.0
00:02:56.814 SYMLINK libspdk_accel_iaa.so
00:02:56.814 SO libspdk_blob_bdev.so.11.0
00:02:56.814 SYMLINK libspdk_accel_dsa.so
00:02:56.814 SYMLINK libspdk_blob_bdev.so
00:02:57.073 LIB libspdk_vfu_device.a
00:02:57.073 SO libspdk_vfu_device.so.3.0
00:02:57.073 CC module/blobfs/bdev/blobfs_bdev.o
00:02:57.073 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:57.073 CC module/bdev/null/bdev_null.o
00:02:57.073 CC module/bdev/delay/vbdev_delay.o
00:02:57.073 CC module/bdev/null/bdev_null_rpc.o
00:02:57.073 CC module/bdev/error/vbdev_error.o
00:02:57.073 CC module/bdev/lvol/vbdev_lvol.o
00:02:57.073 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:57.073 CC module/bdev/error/vbdev_error_rpc.o
00:02:57.073 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:57.073 CC module/bdev/malloc/bdev_malloc.o
00:02:57.073 CC module/bdev/gpt/gpt.o
00:02:57.073 CC module/bdev/gpt/vbdev_gpt.o
00:02:57.073 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:57.073 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:57.073 CC module/bdev/ftl/bdev_ftl.o
00:02:57.073 CC module/bdev/split/vbdev_split.o
00:02:57.073 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:57.073 CC module/bdev/split/vbdev_split_rpc.o
00:02:57.073 CC module/bdev/nvme/bdev_nvme.o
00:02:57.073 CC module/bdev/raid/bdev_raid.o
00:02:57.073 CC module/bdev/nvme/nvme_rpc.o
00:02:57.073 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:57.073 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:57.073 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:57.073 CC module/bdev/passthru/vbdev_passthru.o
00:02:57.073 CC module/bdev/iscsi/bdev_iscsi.o
00:02:57.073 CC module/bdev/raid/bdev_raid_rpc.o
00:02:57.073 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:57.073 CC module/bdev/nvme/bdev_mdns_client.o
00:02:57.073 CC module/bdev/raid/raid0.o
00:02:57.073 CC module/bdev/raid/bdev_raid_sb.o
00:02:57.073 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:57.073 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:57.073 CC module/bdev/nvme/vbdev_opal.o
00:02:57.073 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:57.073 CC module/bdev/raid/concat.o
00:02:57.073 CC module/bdev/raid/raid1.o
00:02:57.073 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:57.073 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:57.073 CC module/bdev/aio/bdev_aio.o
00:02:57.073 CC module/bdev/aio/bdev_aio_rpc.o
00:02:57.332 SYMLINK libspdk_vfu_device.so
00:02:57.332 LIB libspdk_sock_posix.a
00:02:57.332 SO libspdk_sock_posix.so.6.0
00:02:57.590 LIB libspdk_blobfs_bdev.a
00:02:57.590 SYMLINK libspdk_sock_posix.so
00:02:57.590 SO libspdk_blobfs_bdev.so.6.0
00:02:57.590 LIB libspdk_bdev_null.a
00:02:57.590 LIB libspdk_bdev_split.a
00:02:57.590 SYMLINK libspdk_blobfs_bdev.so
00:02:57.590 SO libspdk_bdev_null.so.6.0
00:02:57.590 SO libspdk_bdev_split.so.6.0
00:02:57.590 LIB libspdk_bdev_error.a
00:02:57.590 LIB libspdk_bdev_gpt.a
00:02:57.590 SO libspdk_bdev_error.so.6.0
00:02:57.590 LIB libspdk_bdev_zone_block.a
00:02:57.590 LIB libspdk_bdev_aio.a
00:02:57.590 LIB libspdk_bdev_ftl.a
00:02:57.590 SYMLINK libspdk_bdev_split.so
00:02:57.590 SO libspdk_bdev_gpt.so.6.0
00:02:57.590 SYMLINK libspdk_bdev_null.so
00:02:57.590 LIB libspdk_bdev_passthru.a
00:02:57.590 SO libspdk_bdev_zone_block.so.6.0
00:02:57.590 SO libspdk_bdev_aio.so.6.0
00:02:57.590 SO libspdk_bdev_ftl.so.6.0
00:02:57.590 SO libspdk_bdev_passthru.so.6.0
00:02:57.849 LIB libspdk_bdev_delay.a
00:02:57.849 SYMLINK libspdk_bdev_error.so
00:02:57.849 SYMLINK libspdk_bdev_gpt.so
00:02:57.849 LIB libspdk_bdev_iscsi.a
00:02:57.849 SO libspdk_bdev_delay.so.6.0
00:02:57.849 SYMLINK libspdk_bdev_zone_block.so
00:02:57.849 SYMLINK libspdk_bdev_aio.so
00:02:57.849 LIB libspdk_bdev_malloc.a
00:02:57.849 SYMLINK libspdk_bdev_ftl.so
00:02:57.849 SYMLINK libspdk_bdev_passthru.so
00:02:57.849 SO libspdk_bdev_iscsi.so.6.0
00:02:57.849 SO libspdk_bdev_malloc.so.6.0
00:02:57.849 SYMLINK libspdk_bdev_delay.so
00:02:57.849 SYMLINK libspdk_bdev_iscsi.so
00:02:57.849 SYMLINK libspdk_bdev_malloc.so
00:02:57.849 LIB libspdk_bdev_lvol.a
00:02:57.849 LIB libspdk_bdev_virtio.a
00:02:57.849 SO libspdk_bdev_lvol.so.6.0
00:02:57.849 SO libspdk_bdev_virtio.so.6.0
00:02:58.107 SYMLINK libspdk_bdev_lvol.so
00:02:58.107 SYMLINK libspdk_bdev_virtio.so
00:02:58.107 LIB libspdk_bdev_raid.a
00:02:58.365 SO libspdk_bdev_raid.so.6.0
00:02:58.365 SYMLINK libspdk_bdev_raid.so
00:02:59.738 LIB libspdk_bdev_nvme.a
00:02:59.738 SO libspdk_bdev_nvme.so.7.0
00:02:59.738 SYMLINK libspdk_bdev_nvme.so
00:02:59.996 CC module/event/subsystems/sock/sock.o
00:02:59.996 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:59.996 CC module/event/subsystems/vmd/vmd.o
00:02:59.996 CC module/event/subsystems/keyring/keyring.o
00:02:59.996 CC module/event/subsystems/iobuf/iobuf.o
00:02:59.996 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:59.996 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:59.996 CC module/event/subsystems/scheduler/scheduler.o
00:02:59.996 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:02:59.996 LIB libspdk_event_keyring.a
00:02:59.996 LIB libspdk_event_vhost_blk.a
00:02:59.996 LIB libspdk_event_scheduler.a
00:02:59.996 LIB libspdk_event_vmd.a
00:02:59.996 LIB libspdk_event_vfu_tgt.a
00:02:59.996 LIB libspdk_event_sock.a
00:02:59.996 LIB libspdk_event_iobuf.a
00:02:59.996 SO libspdk_event_keyring.so.1.0
00:03:00.254 SO libspdk_event_vhost_blk.so.3.0
00:03:00.254 SO libspdk_event_scheduler.so.4.0
00:03:00.254 SO libspdk_event_sock.so.5.0
00:03:00.254 SO libspdk_event_vfu_tgt.so.3.0
00:03:00.254 SO libspdk_event_vmd.so.6.0
00:03:00.254 SO libspdk_event_iobuf.so.3.0
00:03:00.254 SYMLINK libspdk_event_keyring.so
00:03:00.254 SYMLINK libspdk_event_vhost_blk.so
00:03:00.254 SYMLINK libspdk_event_scheduler.so
00:03:00.254 SYMLINK libspdk_event_vfu_tgt.so
00:03:00.254 SYMLINK libspdk_event_sock.so
00:03:00.254 SYMLINK libspdk_event_vmd.so
00:03:00.254 SYMLINK libspdk_event_iobuf.so
00:03:00.254 CC module/event/subsystems/accel/accel.o
00:03:00.512 LIB libspdk_event_accel.a
00:03:00.512 SO libspdk_event_accel.so.6.0
00:03:00.512 SYMLINK libspdk_event_accel.so
00:03:00.771 CC module/event/subsystems/bdev/bdev.o
00:03:01.060 LIB libspdk_event_bdev.a
00:03:01.060 SO libspdk_event_bdev.so.6.0
00:03:01.060 SYMLINK libspdk_event_bdev.so
00:03:01.318 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:01.318 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:01.318 CC module/event/subsystems/ublk/ublk.o
00:03:01.318 CC module/event/subsystems/nbd/nbd.o
00:03:01.318 CC module/event/subsystems/scsi/scsi.o
00:03:01.318 LIB libspdk_event_ublk.a
00:03:01.318 LIB libspdk_event_nbd.a
00:03:01.318 LIB libspdk_event_scsi.a
00:03:01.318 SO libspdk_event_nbd.so.6.0
00:03:01.318 SO libspdk_event_ublk.so.3.0
00:03:01.318 SO libspdk_event_scsi.so.6.0
00:03:01.318 SYMLINK libspdk_event_ublk.so
00:03:01.318 SYMLINK libspdk_event_nbd.so
00:03:01.318 LIB libspdk_event_nvmf.a
00:03:01.318 SYMLINK libspdk_event_scsi.so
00:03:01.577 SO libspdk_event_nvmf.so.6.0
00:03:01.577 SYMLINK libspdk_event_nvmf.so
00:03:01.577 CC module/event/subsystems/iscsi/iscsi.o
00:03:01.577 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:01.835 LIB libspdk_event_vhost_scsi.a
00:03:01.835 LIB libspdk_event_iscsi.a
00:03:01.835 SO libspdk_event_vhost_scsi.so.3.0
00:03:01.835 SO libspdk_event_iscsi.so.6.0
00:03:01.835 SYMLINK libspdk_event_vhost_scsi.so
00:03:01.835 SYMLINK libspdk_event_iscsi.so
00:03:01.835 SO libspdk.so.6.0
00:03:01.835 SYMLINK libspdk.so
00:03:02.098 CC app/trace_record/trace_record.o
00:03:02.098 CC app/spdk_top/spdk_top.o
00:03:02.098 CC test/rpc_client/rpc_client_test.o
00:03:02.098 CXX app/trace/trace.o
00:03:02.098 CC app/spdk_nvme_identify/identify.o
00:03:02.098 CC app/spdk_nvme_discover/discovery_aer.o
00:03:02.098 TEST_HEADER include/spdk/accel.h
00:03:02.098 TEST_HEADER include/spdk/accel_module.h
00:03:02.098 TEST_HEADER include/spdk/assert.h
00:03:02.098 TEST_HEADER include/spdk/barrier.h
00:03:02.098 TEST_HEADER include/spdk/base64.h
00:03:02.098 TEST_HEADER include/spdk/bdev.h
00:03:02.098 TEST_HEADER include/spdk/bdev_module.h
00:03:02.098 TEST_HEADER include/spdk/bdev_zone.h
00:03:02.098 CC app/spdk_lspci/spdk_lspci.o
00:03:02.098 TEST_HEADER include/spdk/bit_array.h
00:03:02.098 TEST_HEADER include/spdk/bit_pool.h
00:03:02.098 TEST_HEADER include/spdk/blob_bdev.h
00:03:02.098 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:02.098 CC app/spdk_nvme_perf/perf.o
00:03:02.098 TEST_HEADER include/spdk/blob.h
00:03:02.098 TEST_HEADER include/spdk/blobfs.h
00:03:02.098 TEST_HEADER include/spdk/conf.h
00:03:02.098 TEST_HEADER include/spdk/config.h
00:03:02.098 TEST_HEADER include/spdk/cpuset.h
00:03:02.098 TEST_HEADER include/spdk/crc16.h
00:03:02.098 TEST_HEADER include/spdk/crc64.h
00:03:02.098 TEST_HEADER include/spdk/crc32.h
00:03:02.098 TEST_HEADER include/spdk/dif.h
00:03:02.098 TEST_HEADER include/spdk/dma.h
00:03:02.098 TEST_HEADER include/spdk/endian.h
00:03:02.098 TEST_HEADER include/spdk/env_dpdk.h
00:03:02.098 TEST_HEADER include/spdk/env.h
00:03:02.098 TEST_HEADER include/spdk/event.h
00:03:02.098 TEST_HEADER include/spdk/fd_group.h
00:03:02.098 TEST_HEADER include/spdk/fd.h
00:03:02.098 TEST_HEADER include/spdk/file.h
00:03:02.098 TEST_HEADER include/spdk/ftl.h
00:03:02.098 TEST_HEADER include/spdk/gpt_spec.h
00:03:02.098 TEST_HEADER include/spdk/hexlify.h
00:03:02.098 TEST_HEADER include/spdk/histogram_data.h
00:03:02.098 TEST_HEADER include/spdk/idxd.h
00:03:02.098 TEST_HEADER include/spdk/idxd_spec.h
00:03:02.098 TEST_HEADER include/spdk/init.h
00:03:02.098 TEST_HEADER include/spdk/ioat.h
00:03:02.098 TEST_HEADER include/spdk/iscsi_spec.h
00:03:02.098 TEST_HEADER include/spdk/ioat_spec.h
00:03:02.098 TEST_HEADER include/spdk/jsonrpc.h
00:03:02.098 TEST_HEADER include/spdk/json.h
00:03:02.098 TEST_HEADER include/spdk/keyring.h
00:03:02.098 TEST_HEADER include/spdk/keyring_module.h
00:03:02.098 TEST_HEADER include/spdk/likely.h
00:03:02.098 TEST_HEADER include/spdk/lvol.h
00:03:02.098 TEST_HEADER include/spdk/log.h
00:03:02.098 TEST_HEADER include/spdk/memory.h
00:03:02.098 TEST_HEADER include/spdk/nbd.h
00:03:02.098 TEST_HEADER include/spdk/mmio.h
00:03:02.098 TEST_HEADER include/spdk/net.h
00:03:02.098 TEST_HEADER include/spdk/notify.h
00:03:02.098 TEST_HEADER include/spdk/nvme.h
00:03:02.098 TEST_HEADER include/spdk/nvme_intel.h
00:03:02.098 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:02.098 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:02.098 TEST_HEADER include/spdk/nvme_spec.h
00:03:02.098 TEST_HEADER include/spdk/nvme_zns.h
00:03:02.098 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:02.098 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:02.098 TEST_HEADER include/spdk/nvmf.h
00:03:02.098 TEST_HEADER include/spdk/nvmf_spec.h
00:03:02.098 TEST_HEADER include/spdk/nvmf_transport.h
00:03:02.098 TEST_HEADER include/spdk/opal.h
00:03:02.098 TEST_HEADER include/spdk/pci_ids.h
00:03:02.098 TEST_HEADER include/spdk/opal_spec.h
00:03:02.098 TEST_HEADER include/spdk/pipe.h
00:03:02.098 TEST_HEADER include/spdk/reduce.h
00:03:02.098 TEST_HEADER include/spdk/queue.h
00:03:02.098 TEST_HEADER include/spdk/rpc.h
00:03:02.098 TEST_HEADER include/spdk/scheduler.h
00:03:02.098 TEST_HEADER include/spdk/scsi.h
00:03:02.098 TEST_HEADER include/spdk/sock.h
00:03:02.098 TEST_HEADER include/spdk/scsi_spec.h
00:03:02.098 TEST_HEADER include/spdk/stdinc.h
00:03:02.098 TEST_HEADER include/spdk/string.h
00:03:02.098 TEST_HEADER include/spdk/thread.h
00:03:02.098 TEST_HEADER include/spdk/trace.h
00:03:02.098 TEST_HEADER include/spdk/trace_parser.h
00:03:02.098 CC app/spdk_dd/spdk_dd.o
00:03:02.098 TEST_HEADER include/spdk/tree.h
00:03:02.098 TEST_HEADER include/spdk/ublk.h
00:03:02.098 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:02.098 TEST_HEADER include/spdk/util.h
00:03:02.098 TEST_HEADER include/spdk/uuid.h
00:03:02.098 TEST_HEADER include/spdk/version.h
00:03:02.098 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:02.098 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:02.098 TEST_HEADER include/spdk/vhost.h
00:03:02.098 TEST_HEADER include/spdk/vmd.h
00:03:02.098 TEST_HEADER include/spdk/xor.h
00:03:02.098 TEST_HEADER include/spdk/zipf.h
00:03:02.098 CXX test/cpp_headers/accel.o
00:03:02.098 CXX test/cpp_headers/accel_module.o
00:03:02.098 CXX test/cpp_headers/assert.o
00:03:02.098 CXX test/cpp_headers/barrier.o
00:03:02.098 CXX test/cpp_headers/base64.o
00:03:02.098 CXX test/cpp_headers/bdev.o
00:03:02.098 CXX test/cpp_headers/bdev_module.o
00:03:02.098 CXX test/cpp_headers/bdev_zone.o
00:03:02.098 CXX test/cpp_headers/bit_array.o
00:03:02.098 CXX test/cpp_headers/bit_pool.o
00:03:02.098 CXX test/cpp_headers/blob_bdev.o
00:03:02.098 CXX test/cpp_headers/blobfs_bdev.o
00:03:02.098 CXX test/cpp_headers/blobfs.o
00:03:02.098 CXX test/cpp_headers/blob.o
00:03:02.098 CC app/iscsi_tgt/iscsi_tgt.o
00:03:02.098 CXX test/cpp_headers/conf.o
00:03:02.098 CXX test/cpp_headers/config.o
00:03:02.098 CXX test/cpp_headers/cpuset.o
00:03:02.098 CXX test/cpp_headers/crc16.o
00:03:02.098 CC app/nvmf_tgt/nvmf_main.o
00:03:02.361 CXX test/cpp_headers/crc32.o
00:03:02.361 CC examples/ioat/perf/perf.o
00:03:02.361 CC test/app/jsoncat/jsoncat.o
00:03:02.361 CC app/spdk_tgt/spdk_tgt.o
00:03:02.361 CC examples/util/zipf/zipf.o
00:03:02.361 CC examples/ioat/verify/verify.o
00:03:02.361 CC test/thread/poller_perf/poller_perf.o
00:03:02.361 CC test/env/pci/pci_ut.o
00:03:02.361 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:02.361 CC test/app/histogram_perf/histogram_perf.o
00:03:02.361 CC test/app/stub/stub.o
00:03:02.361 CC test/env/vtophys/vtophys.o
00:03:02.361 CC test/env/memory/memory_ut.o
00:03:02.361 CC app/fio/nvme/fio_plugin.o
00:03:02.361 CC test/dma/test_dma/test_dma.o
00:03:02.361 CC app/fio/bdev/fio_plugin.o
00:03:02.361 CC test/app/bdev_svc/bdev_svc.o
00:03:02.361 CC test/env/mem_callbacks/mem_callbacks.o
00:03:02.361 LINK spdk_lspci
00:03:02.361 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:02.624 LINK rpc_client_test
00:03:02.624 LINK spdk_nvme_discover
00:03:02.624 LINK zipf
00:03:02.624 LINK jsoncat
00:03:02.624 LINK histogram_perf
00:03:02.624 CXX test/cpp_headers/crc64.o
00:03:02.624 CXX test/cpp_headers/dif.o
00:03:02.624 CXX test/cpp_headers/dma.o
00:03:02.624 LINK vtophys
00:03:02.624 LINK poller_perf
00:03:02.624 LINK nvmf_tgt
00:03:02.624 LINK env_dpdk_post_init
00:03:02.624 CXX test/cpp_headers/endian.o
00:03:02.624 CXX test/cpp_headers/env_dpdk.o
00:03:02.624 LINK spdk_trace_record
00:03:02.624 CXX test/cpp_headers/env.o
00:03:02.624 CXX test/cpp_headers/event.o
00:03:02.624 CXX test/cpp_headers/fd_group.o
00:03:02.624 CXX test/cpp_headers/fd.o
00:03:02.624 CXX test/cpp_headers/file.o
00:03:02.624 CXX test/cpp_headers/ftl.o
00:03:02.624 LINK interrupt_tgt
00:03:02.624 LINK stub
00:03:02.624 LINK iscsi_tgt
00:03:02.624 CXX test/cpp_headers/gpt_spec.o
00:03:02.624 CXX test/cpp_headers/hexlify.o
00:03:02.624 LINK ioat_perf
00:03:02.624 CXX test/cpp_headers/histogram_data.o
00:03:02.624 CXX test/cpp_headers/idxd.o
00:03:02.888 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:02.888 LINK verify
00:03:02.888 LINK bdev_svc
00:03:02.888 CXX test/cpp_headers/idxd_spec.o
00:03:02.888 LINK spdk_tgt
00:03:02.888 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:02.888 CXX test/cpp_headers/init.o
00:03:02.888 CXX test/cpp_headers/ioat.o
00:03:02.888 CXX test/cpp_headers/ioat_spec.o
00:03:02.888 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:02.888 CXX test/cpp_headers/iscsi_spec.o
00:03:02.888 CXX test/cpp_headers/json.o
00:03:02.888 LINK spdk_dd
00:03:02.888 CXX test/cpp_headers/jsonrpc.o
00:03:02.888 CXX test/cpp_headers/keyring.o
00:03:03.164 CXX test/cpp_headers/keyring_module.o
00:03:03.164 CXX test/cpp_headers/likely.o
00:03:03.164 CXX test/cpp_headers/log.o
00:03:03.164 CXX test/cpp_headers/lvol.o
00:03:03.164 CXX test/cpp_headers/memory.o
00:03:03.164 CXX test/cpp_headers/mmio.o
00:03:03.164 LINK spdk_trace
00:03:03.164 LINK pci_ut
00:03:03.164 CXX test/cpp_headers/nbd.o
00:03:03.164 CXX test/cpp_headers/net.o
00:03:03.164 CXX test/cpp_headers/notify.o
00:03:03.164 CXX test/cpp_headers/nvme.o
00:03:03.164 CXX test/cpp_headers/nvme_intel.o
00:03:03.164 CXX test/cpp_headers/nvme_ocssd.o
00:03:03.164 CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:03.164 LINK test_dma
00:03:03.164 CXX test/cpp_headers/nvme_spec.o
00:03:03.164 CXX test/cpp_headers/nvme_zns.o
00:03:03.164 CXX test/cpp_headers/nvmf_cmd.o
00:03:03.164 CXX test/cpp_headers/nvmf_fc_spec.o
00:03:03.164 CXX test/cpp_headers/nvmf.o
00:03:03.164 CXX test/cpp_headers/nvmf_spec.o
00:03:03.164 CXX test/cpp_headers/nvmf_transport.o
00:03:03.164 CXX test/cpp_headers/opal.o
00:03:03.164 CXX test/cpp_headers/opal_spec.o
00:03:03.164 LINK nvme_fuzz
00:03:03.164 CC examples/sock/hello_world/hello_sock.o
00:03:03.164 CXX test/cpp_headers/pci_ids.o
00:03:03.164 CC examples/vmd/led/led.o
00:03:03.164 CXX test/cpp_headers/pipe.o
00:03:03.164 CC examples/idxd/perf/perf.o
00:03:03.426 CC examples/thread/thread/thread_ex.o
00:03:03.426 CC examples/vmd/lsvmd/lsvmd.o
00:03:03.426 CC test/event/event_perf/event_perf.o
00:03:03.426 CC test/event/reactor/reactor.o
00:03:03.426 LINK spdk_nvme
00:03:03.426 LINK spdk_bdev
00:03:03.426 CC test/event/reactor_perf/reactor_perf.o
00:03:03.426 CXX test/cpp_headers/queue.o
00:03:03.426 CXX test/cpp_headers/reduce.o
00:03:03.426 CXX test/cpp_headers/rpc.o
00:03:03.426 CC test/event/app_repeat/app_repeat.o
00:03:03.426 CXX test/cpp_headers/scheduler.o
00:03:03.426 CXX test/cpp_headers/scsi.o
00:03:03.426 CXX test/cpp_headers/scsi_spec.o
00:03:03.426 CXX test/cpp_headers/sock.o
00:03:03.426 CXX test/cpp_headers/stdinc.o
00:03:03.426 CXX test/cpp_headers/string.o
00:03:03.426 CXX test/cpp_headers/thread.o
00:03:03.426 CXX test/cpp_headers/trace.o
00:03:03.426 CXX test/cpp_headers/trace_parser.o
00:03:03.690 CXX test/cpp_headers/tree.o
00:03:03.690 CXX test/cpp_headers/ublk.o
00:03:03.690 CXX test/cpp_headers/util.o
00:03:03.690 CXX test/cpp_headers/uuid.o
00:03:03.690 CC test/event/scheduler/scheduler.o
00:03:03.690 CXX test/cpp_headers/version.o
00:03:03.690 LINK lsvmd
00:03:03.690 CXX test/cpp_headers/vfio_user_pci.o
00:03:03.690 CXX test/cpp_headers/vfio_user_spec.o
00:03:03.690 CXX test/cpp_headers/vhost.o
00:03:03.690 LINK led
00:03:03.690 CXX test/cpp_headers/vmd.o
00:03:03.690 CXX test/cpp_headers/xor.o
00:03:03.690 CXX test/cpp_headers/zipf.o
00:03:03.690 LINK spdk_nvme_perf
00:03:03.690 LINK event_perf
00:03:03.690 LINK mem_callbacks
00:03:03.690 LINK reactor
00:03:03.690 LINK reactor_perf
00:03:03.690 CC app/vhost/vhost.o
00:03:03.690 LINK app_repeat
00:03:03.690 LINK vhost_fuzz
00:03:03.690 LINK spdk_nvme_identify
00:03:03.690 LINK hello_sock
00:03:03.690 LINK thread
00:03:03.949 CC test/nvme/reset/reset.o
00:03:03.949 CC test/nvme/aer/aer.o
00:03:03.949 CC test/nvme/overhead/overhead.o
00:03:03.949 CC test/nvme/e2edp/nvme_dp.o
00:03:03.949 CC test/nvme/startup/startup.o
00:03:03.949 CC test/nvme/err_injection/err_injection.o
00:03:03.949 CC test/nvme/sgl/sgl.o
00:03:03.949 LINK spdk_top
00:03:03.949 CC test/nvme/reserve/reserve.o
00:03:03.949 CC test/accel/dif/dif.o
00:03:03.949 CC test/blobfs/mkfs/mkfs.o
00:03:03.949 LINK idxd_perf
00:03:03.949 CC test/nvme/simple_copy/simple_copy.o
00:03:03.949 CC test/nvme/connect_stress/connect_stress.o
00:03:03.949 CC test/nvme/boot_partition/boot_partition.o
00:03:03.949 CC test/nvme/compliance/nvme_compliance.o
00:03:03.949 CC test/lvol/esnap/esnap.o
00:03:03.949 CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:03.949 CC test/nvme/fdp/fdp.o
00:03:03.949 CC test/nvme/fused_ordering/fused_ordering.o
00:03:03.949 CC test/nvme/cuse/cuse.o
00:03:03.949 LINK scheduler
00:03:03.949 LINK vhost
00:03:04.207 LINK startup
00:03:04.207 LINK mkfs
00:03:04.207 LINK boot_partition
00:03:04.207 LINK err_injection
00:03:04.207 LINK doorbell_aers
00:03:04.207 CC examples/nvme/hello_world/hello_world.o
00:03:04.207 CC examples/nvme/abort/abort.o
00:03:04.207 CC examples/nvme/arbitration/arbitration.o
00:03:04.207 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:04.207 CC examples/nvme/nvme_manage/nvme_manage.o
00:03:04.207 CC examples/nvme/reconnect/reconnect.o
00:03:04.207 CC examples/nvme/hotplug/hotplug.o
00:03:04.207 CC examples/nvme/cmb_copy/cmb_copy.o
00:03:04.207 LINK aer
00:03:04.207 LINK reserve
00:03:04.207 LINK connect_stress
00:03:04.207 LINK overhead
00:03:04.466 LINK memory_ut
00:03:04.466 LINK nvme_compliance
00:03:04.466 LINK simple_copy
00:03:04.466 LINK fused_ordering
00:03:04.466 LINK
reset 00:03:04.466 LINK sgl 00:03:04.466 LINK nvme_dp 00:03:04.466 CC examples/accel/perf/accel_perf.o 00:03:04.466 LINK fdp 00:03:04.466 CC examples/blob/cli/blobcli.o 00:03:04.466 CC examples/blob/hello_world/hello_blob.o 00:03:04.466 LINK pmr_persistence 00:03:04.466 LINK dif 00:03:04.466 LINK cmb_copy 00:03:04.466 LINK hotplug 00:03:04.724 LINK hello_world 00:03:04.724 LINK reconnect 00:03:04.724 LINK abort 00:03:04.724 LINK hello_blob 00:03:04.724 LINK arbitration 00:03:04.983 LINK nvme_manage 00:03:04.983 CC test/bdev/bdevio/bdevio.o 00:03:04.983 LINK accel_perf 00:03:04.983 LINK blobcli 00:03:05.241 LINK iscsi_fuzz 00:03:05.241 CC examples/bdev/bdevperf/bdevperf.o 00:03:05.241 CC examples/bdev/hello_world/hello_bdev.o 00:03:05.499 LINK bdevio 00:03:05.499 LINK cuse 00:03:05.499 LINK hello_bdev 00:03:06.077 LINK bdevperf 00:03:06.642 CC examples/nvmf/nvmf/nvmf.o 00:03:06.901 LINK nvmf 00:03:08.800 LINK esnap 00:03:09.366 00:03:09.366 real 0m41.345s 00:03:09.366 user 7m26.507s 00:03:09.366 sys 1m48.141s 00:03:09.366 08:36:27 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:09.366 08:36:27 make -- common/autotest_common.sh@10 -- $ set +x 00:03:09.366 ************************************ 00:03:09.366 END TEST make 00:03:09.366 ************************************ 00:03:09.366 08:36:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:09.366 08:36:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:09.366 08:36:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:09.366 08:36:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.366 08:36:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:09.367 08:36:27 -- pm/common@44 -- $ pid=731124 00:03:09.367 08:36:27 -- pm/common@50 -- $ kill -TERM 731124 00:03:09.367 08:36:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.367 08:36:27 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:09.367 08:36:27 -- pm/common@44 -- $ pid=731126 00:03:09.367 08:36:27 -- pm/common@50 -- $ kill -TERM 731126 00:03:09.367 08:36:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.367 08:36:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:09.367 08:36:27 -- pm/common@44 -- $ pid=731128 00:03:09.367 08:36:27 -- pm/common@50 -- $ kill -TERM 731128 00:03:09.367 08:36:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.367 08:36:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:09.367 08:36:27 -- pm/common@44 -- $ pid=731156 00:03:09.367 08:36:27 -- pm/common@50 -- $ sudo -E kill -TERM 731156 00:03:09.367 08:36:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:09.367 08:36:27 -- nvmf/common.sh@7 -- # uname -s 00:03:09.367 08:36:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:09.367 08:36:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:09.367 08:36:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:09.367 08:36:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:09.367 08:36:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:09.367 08:36:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:09.367 08:36:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:09.367 08:36:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:09.367 08:36:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:09.367 08:36:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:09.367 08:36:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:09.367 08:36:27 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:09.367 08:36:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:09.367 08:36:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:09.367 08:36:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:09.367 08:36:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:09.367 08:36:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:09.367 08:36:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:09.367 08:36:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:09.367 08:36:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:09.367 08:36:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.367 08:36:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.367 08:36:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.367 08:36:27 -- paths/export.sh@5 -- # export PATH 00:03:09.367 08:36:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.367 08:36:27 -- nvmf/common.sh@47 -- # : 0 00:03:09.367 08:36:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:09.367 08:36:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:09.367 08:36:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:09.367 08:36:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:09.367 08:36:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:09.367 08:36:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:09.367 08:36:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:09.367 08:36:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:09.367 08:36:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:09.367 08:36:27 -- spdk/autotest.sh@32 -- # uname -s 00:03:09.367 08:36:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:09.367 08:36:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:09.367 08:36:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:09.367 08:36:27 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:09.367 08:36:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:09.367 08:36:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:09.367 08:36:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:09.367 08:36:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:09.367 08:36:27 -- spdk/autotest.sh@48 -- # udevadm_pid=802969 00:03:09.367 08:36:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:09.367 08:36:27 -- 
spdk/autotest.sh@53 -- # start_monitor_resources 00:03:09.367 08:36:27 -- pm/common@17 -- # local monitor 00:03:09.367 08:36:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.367 08:36:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.367 08:36:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.367 08:36:27 -- pm/common@21 -- # date +%s 00:03:09.367 08:36:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.367 08:36:27 -- pm/common@21 -- # date +%s 00:03:09.367 08:36:27 -- pm/common@25 -- # sleep 1 00:03:09.367 08:36:27 -- pm/common@21 -- # date +%s 00:03:09.367 08:36:27 -- pm/common@21 -- # date +%s 00:03:09.367 08:36:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721975787 00:03:09.367 08:36:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721975787 00:03:09.367 08:36:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721975787 00:03:09.367 08:36:27 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721975787 00:03:09.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721975787_collect-vmstat.pm.log 00:03:09.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721975787_collect-cpu-load.pm.log 00:03:09.367 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721975787_collect-cpu-temp.pm.log 00:03:09.625 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721975787_collect-bmc-pm.bmc.pm.log 00:03:10.558 08:36:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:10.558 08:36:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:10.558 08:36:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:10.558 08:36:28 -- common/autotest_common.sh@10 -- # set +x 00:03:10.558 08:36:28 -- spdk/autotest.sh@59 -- # create_test_list 00:03:10.558 08:36:28 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:10.558 08:36:28 -- common/autotest_common.sh@10 -- # set +x 00:03:10.558 08:36:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:10.558 08:36:28 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:10.558 08:36:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:10.558 08:36:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:10.558 08:36:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:10.558 08:36:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:10.558 08:36:28 -- common/autotest_common.sh@1455 -- # uname 00:03:10.558 08:36:28 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:10.558 08:36:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:10.558 08:36:28 -- common/autotest_common.sh@1475 -- # uname 00:03:10.558 08:36:28 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:10.558 08:36:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:10.558 08:36:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:10.558 08:36:28 -- spdk/autotest.sh@72 -- # 
hash lcov 00:03:10.558 08:36:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:10.558 08:36:28 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:10.558 --rc lcov_branch_coverage=1 00:03:10.558 --rc lcov_function_coverage=1 00:03:10.558 --rc genhtml_branch_coverage=1 00:03:10.558 --rc genhtml_function_coverage=1 00:03:10.558 --rc genhtml_legend=1 00:03:10.558 --rc geninfo_all_blocks=1 00:03:10.558 ' 00:03:10.558 08:36:28 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:10.558 --rc lcov_branch_coverage=1 00:03:10.558 --rc lcov_function_coverage=1 00:03:10.558 --rc genhtml_branch_coverage=1 00:03:10.558 --rc genhtml_function_coverage=1 00:03:10.558 --rc genhtml_legend=1 00:03:10.558 --rc geninfo_all_blocks=1 00:03:10.558 ' 00:03:10.558 08:36:28 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:10.558 --rc lcov_branch_coverage=1 00:03:10.558 --rc lcov_function_coverage=1 00:03:10.558 --rc genhtml_branch_coverage=1 00:03:10.558 --rc genhtml_function_coverage=1 00:03:10.558 --rc genhtml_legend=1 00:03:10.558 --rc geninfo_all_blocks=1 00:03:10.558 --no-external' 00:03:10.558 08:36:28 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:10.558 --rc lcov_branch_coverage=1 00:03:10.558 --rc lcov_function_coverage=1 00:03:10.558 --rc genhtml_branch_coverage=1 00:03:10.558 --rc genhtml_function_coverage=1 00:03:10.558 --rc genhtml_legend=1 00:03:10.558 --rc geninfo_all_blocks=1 00:03:10.558 --no-external' 00:03:10.558 08:36:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:10.558 lcov: LCOV version 1.14 00:03:10.558 08:36:28 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:28.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:28.625 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:40.856 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:40.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:40.856 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:40.857 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:40.857 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:40.857 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:40.857 geninfo: WARNING: "no functions found" / "GCOV did not produce any data" repeated for each remaining header-only object under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/: lvol.gcno, memory.gcno, mmio.gcno, nbd.gcno, net.gcno, notify.gcno, nvme.gcno, nvme_intel.gcno, nvme_ocssd_spec.gcno, nvme_ocssd.gcno, nvme_spec.gcno, nvme_zns.gcno, nvmf_cmd.gcno, nvmf_fc_spec.gcno, nvmf_spec.gcno, nvmf.gcno, nvmf_transport.gcno, opal.gcno, opal_spec.gcno, pci_ids.gcno, pipe.gcno, queue.gcno, reduce.gcno, rpc.gcno, scheduler.gcno, scsi.gcno, scsi_spec.gcno, stdinc.gcno, sock.gcno, string.gcno, thread.gcno, trace.gcno, trace_parser.gcno, tree.gcno, ublk.gcno, util.gcno, uuid.gcno, version.gcno, vfio_user_pci.gcno, vfio_user_spec.gcno, vhost.gcno, xor.gcno, vmd.gcno, zipf.gcno 00:03:44.139 08:37:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:44.139 08:37:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:44.139 08:37:02 -- common/autotest_common.sh@10 -- # set +x 00:03:44.139 08:37:02 -- spdk/autotest.sh@91 -- # rm -f 00:03:44.139 08:37:02 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.515 0000:88:00.0 (8086 0a54): Already using the nvme driver
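The cleanup phase that follows runs `get_zoned_devs`, which skips zoned block devices by reading `/sys/block/<dev>/queue/zoned` for each nvme node. Below is a minimal sketch of that check, pointed at a fake sysfs tree so it can run without real hardware; the `$sysroot` parameter is an addition for testing (the real helper reads `/sys` directly):

```shell
# Sketch of the is_block_zoned check from autotest_common.sh, parameterized
# on a fake sysfs root so it runs anywhere ($sysroot is a testing addition).
is_block_zoned() {
    local sysroot=$1 device=$2
    # Devices without a queue/zoned attribute cannot be zoned.
    [ -e "$sysroot/block/$device/queue/zoned" ] || return 1
    # "none" marks a regular (non-zoned) block device.
    [ "$(cat "$sysroot/block/$device/queue/zoned")" != none ]
}

sysroot=$(mktemp -d)
mkdir -p "$sysroot/block/nvme0n1/queue"
echo none > "$sysroot/block/nvme0n1/queue/zoned"
if is_block_zoned "$sysroot" nvme0n1; then result=zoned; else result=regular; fi
echo "$result"   # prints "regular"
```

In the run above the same probe yields `[[ none != none ]]`, i.e. the lone nvme0n1 is a regular device, so `zoned_devs` stays empty and `(( 0 > 0 ))` skips the zoned-device branch.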
00:03:45.515 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:45.515 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:45.515 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:45.515 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:45.515 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:45.515 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:45.515 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:45.515 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:45.515 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:45.515 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:45.515 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:45.515 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:45.515 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:45.515 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:45.515 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:45.515 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:45.773 08:37:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:45.773 08:37:04 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:45.773 08:37:04 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:45.773 08:37:04 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:45.773 08:37:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:45.773 08:37:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:45.773 08:37:04 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:45.773 08:37:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:45.773 08:37:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:45.773 08:37:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:45.773 08:37:04 -- spdk/autotest.sh@110 -- # 
for dev in /dev/nvme*n!(*p*) 00:03:45.773 08:37:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:45.773 08:37:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:45.773 08:37:04 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:45.773 08:37:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:45.773 No valid GPT data, bailing 00:03:45.773 08:37:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:45.773 08:37:04 -- scripts/common.sh@391 -- # pt= 00:03:45.773 08:37:04 -- scripts/common.sh@392 -- # return 1 00:03:45.773 08:37:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:45.773 1+0 records in 00:03:45.773 1+0 records out 00:03:45.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00283242 s, 370 MB/s 00:03:45.773 08:37:04 -- spdk/autotest.sh@118 -- # sync 00:03:45.773 08:37:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:45.773 08:37:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:45.773 08:37:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:47.673 08:37:05 -- spdk/autotest.sh@124 -- # uname -s 00:03:47.673 08:37:05 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:47.673 08:37:05 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:47.673 08:37:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.673 08:37:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.673 08:37:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.673 ************************************ 00:03:47.673 START TEST setup.sh 00:03:47.673 ************************************ 00:03:47.673 08:37:05 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:47.673 * Looking for test 
storage... 00:03:47.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:47.673 08:37:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:47.673 08:37:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:47.673 08:37:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:47.673 08:37:05 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.673 08:37:05 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.673 08:37:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:47.673 ************************************ 00:03:47.673 START TEST acl 00:03:47.673 ************************************ 00:03:47.673 08:37:05 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:47.673 * Looking for test storage... 00:03:47.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:47.673 08:37:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:47.673 08:37:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:47.673 08:37:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:47.673 08:37:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:47.673 08:37:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.673 08:37:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:47.673 08:37:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:47.673 08:37:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.673 08:37:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.673 08:37:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:47.673 08:37:05 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:47.673 08:37:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:47.673 08:37:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:47.673 08:37:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:47.673 08:37:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.673 08:37:05 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.048 08:37:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:49.048 08:37:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:49.048 08:37:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.048 08:37:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:49.048 08:37:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.048 08:37:07 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:50.420 Hugepages 00:03:50.420 node hugesize free / total 00:03:50.420 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:50.420 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.420 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.420 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 00:03:50.421 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:50.421 
08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.421 08:37:08 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:50.421 08:37:08 setup.sh.acl -- 
setup/acl.sh@54 -- # run_test denied denied 00:03:50.421 08:37:08 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.421 08:37:08 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.421 08:37:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:50.421 ************************************ 00:03:50.421 START TEST denied 00:03:50.421 ************************************ 00:03:50.421 08:37:08 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:50.421 08:37:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:50.421 08:37:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:50.421 08:37:08 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:50.421 08:37:08 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.421 08:37:08 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:51.795 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:51.795 08:37:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:51.795 08:37:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:51.795 08:37:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:51.795 08:37:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:51.795 08:37:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:51.795 08:37:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:51.795 08:37:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:51.795 08:37:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:51.795 08:37:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.795 08:37:09 
setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.329 00:03:54.329 real 0m3.796s 00:03:54.329 user 0m1.081s 00:03:54.329 sys 0m1.795s 00:03:54.329 08:37:12 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.329 08:37:12 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:54.329 ************************************ 00:03:54.329 END TEST denied 00:03:54.329 ************************************ 00:03:54.329 08:37:12 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:54.329 08:37:12 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.329 08:37:12 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.329 08:37:12 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.329 ************************************ 00:03:54.329 START TEST allowed 00:03:54.329 ************************************ 00:03:54.329 08:37:12 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:54.329 08:37:12 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:54.329 08:37:12 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:54.329 08:37:12 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:54.329 08:37:12 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.329 08:37:12 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.257 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:56.257 08:37:14 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:56.257 08:37:14 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:56.257 08:37:14 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:56.257 08:37:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 
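The acl traces earlier build the `devs`/`drivers` maps by piping the `setup.sh status` table through `read -r _ dev _ _ _ driver _`, keeping only BDF rows whose driver column is `nvme`. A small re-creation of that loop; the sample rows below are illustrative stand-ins that mirror the table format, not real `setup.sh` output:

```shell
# Re-creation of the acl.sh collection loop: read the
# "Type BDF Vendor Device NUMA Driver ..." table and keep BDFs bound to nvme.
# Sample rows are made up to mirror the format seen in the log.
devs=
while read -r _ dev _ _ _ driver _; do
    case $dev in
        *:*:*.*) ;;        # keep only BDF rows; skip headers/hugepage lines
        *) continue ;;
    esac
    [ "$driver" = nvme ] || continue
    devs="$devs $dev"
done <<'EOF'
Type BDF Vendor Device NUMA Driver Device Block devices
I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
EOF
devs=${devs# }
echo "$devs"   # prints "0000:88:00.0"
```

This matches the trace: every `ioatdma` row hits `continue`, only `0000:88:00.0` satisfies `[[ nvme == nvme ]]`, and `(( 1 > 0 ))` lets the denied/allowed subtests run against that one controller.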
00:03:56.257 08:37:14 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.160 00:03:58.160 real 0m3.809s 00:03:58.160 user 0m1.026s 00:03:58.160 sys 0m1.625s 00:03:58.160 08:37:16 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.160 08:37:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:58.160 ************************************ 00:03:58.160 END TEST allowed 00:03:58.160 ************************************ 00:03:58.160 00:03:58.160 real 0m10.397s 00:03:58.160 user 0m3.254s 00:03:58.160 sys 0m5.133s 00:03:58.160 08:37:16 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.160 08:37:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.160 ************************************ 00:03:58.160 END TEST acl 00:03:58.160 ************************************ 00:03:58.160 08:37:16 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:58.160 08:37:16 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.160 08:37:16 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.161 08:37:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.161 ************************************ 00:03:58.161 START TEST hugepages 00:03:58.161 ************************************ 00:03:58.161 08:37:16 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:58.161 * Looking for test storage... 
00:03:58.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42266148 kB' 'MemAvailable: 45773352 kB' 'Buffers: 2704 kB' 'Cached: 11723480 kB' 'SwapCached: 0 kB' 'Active: 8718948 kB' 'Inactive: 3506192 kB' 'Active(anon): 8323452 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502660 kB' 'Mapped: 173276 kB' 'Shmem: 7824496 kB' 'KReclaimable: 198712 kB' 'Slab: 569628 kB' 'SReclaimable: 198712 kB' 'SUnreclaim: 370916 kB' 'KernelStack: 12912 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 9410544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 
08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.161 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 
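The long run of `read -r var val _` / `continue` records above is `setup/common.sh@31-33` scanning /proc/meminfo key by key until it reaches `Hugepagesize`, then echoing its value (2048) and returning. A minimal standalone sketch of that loop, reconstructed from the log rather than taken from the SPDK source; the sample text below is an assumption standing in for /proc/meminfo so the snippet runs anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the common.sh@31-33 pattern visible in the log: split each
# meminfo line on ': ', skip ("continue") every key that is not
# Hugepagesize, then echo the matching value and stop.
get_hugepagesize() {
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == Hugepagesize ]] || continue   # one log record per skipped key
        echo "$val"                              # value in kB; unit lands in "_"
        return 0
    done
}

# Sample input (an assumption) stands in for /proc/meminfo.
sample='MemTotal: 60541712 kB
Buffers: 2704 kB
Hugepagesize: 2048 kB'

get_hugepagesize <<< "$sample"   # prints 2048
```

The `_` catch-all in `read -r var val _` is what quietly discards the `kB` unit, which is why the trace can `echo 2048` with no further parsing.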
08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:58.162 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.163 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.163 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.163 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.163 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:58.163 
08:37:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.163 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.163 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.163 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.163 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:58.163 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:58.163 08:37:16 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:58.163 08:37:16 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.163 08:37:16 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.163 08:37:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.163 ************************************ 00:03:58.163 START TEST default_setup 00:03:58.163 ************************************ 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.163 08:37:16 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.163 08:37:16 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.537 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:59.537 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:59.537 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:59.537 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:59.537 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:59.537 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:59.537 0000:00:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:03:59.537 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:59.537 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:59.537 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:59.537 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:59.537 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:59.537 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:59.537 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:59.537 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:59.537 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:00.481 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.481 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44377088 kB' 'MemAvailable: 47884284 kB' 'Buffers: 2704 kB' 'Cached: 11723568 kB' 'SwapCached: 0 kB' 'Active: 8737872 kB' 'Inactive: 3506192 kB' 'Active(anon): 8342376 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521112 kB' 'Mapped: 173256 kB' 'Shmem: 7824584 kB' 'KReclaimable: 198696 kB' 'Slab: 569024 kB' 'SReclaimable: 198696 kB' 'SUnreclaim: 370328 kB' 'KernelStack: 12784 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 
14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 
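A little earlier in the trace, the `clear_hp` phase (`hugepages.sh@37-45`) walked both NUMA nodes and echoed 0 into every `hugepages-*/nr_hugepages` file before exporting `CLEAR_HUGE=yes`. A hedged sketch of that reset; `SYSFS_ROOT` is an assumption, a scratch directory standing in for the live `/sys/devices/system/node` tree so the sketch is safe to run without root:

```shell
#!/usr/bin/env bash
# Sketch of clear_hp as it appears in the log: for each node directory,
# zero every per-size hugepage pool. A mktemp scratch tree mimics the
# sysfs layout (two nodes, one 2048kB pool each, preloaded with 1024).
SYSFS_ROOT=$(mktemp -d)
for n in 0 1; do
    mkdir -p "$SYSFS_ROOT/node$n/hugepages/hugepages-2048kB"
    echo 1024 > "$SYSFS_ROOT/node$n/hugepages/hugepages-2048kB/nr_hugepages"
done

clear_hp() {
    local node hp
    for node in "$SYSFS_ROOT"/node[0-9]*; do       # the log uses extglob node+([0-9])
        for hp in "$node/hugepages/hugepages-"*; do
            echo 0 > "$hp/nr_hugepages"            # matches the "echo 0" records
        done
    done
}

clear_hp
cat "$SYSFS_ROOT/node0/hugepages/hugepages-2048kB/nr_hugepages"   # prints 0
```

Against the real sysfs the inner glob would also pick up a `hugepages-1048576kB` pool where 1 GiB pages are configured, which is why the trace shows two `echo 0` records per node.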
08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 
08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.482 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.483 08:37:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44377968 kB' 'MemAvailable: 47885164 kB' 'Buffers: 2704 kB' 'Cached: 11723572 kB' 'SwapCached: 0 kB' 'Active: 8737388 kB' 'Inactive: 3506192 kB' 'Active(anon): 8341892 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520660 kB' 'Mapped: 173320 kB' 'Shmem: 7824588 kB' 'KReclaimable: 198696 kB' 'Slab: 569088 kB' 'SReclaimable: 198696 kB' 'SUnreclaim: 370392 kB' 'KernelStack: 12768 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.483 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.484 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 
08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.485 08:37:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:00.485 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44378388 kB' 'MemAvailable: 47885584 kB' 'Buffers: 2704 kB' 'Cached: 11723588 kB' 'SwapCached: 0 kB' 'Active: 8737348 kB' 'Inactive: 3506192 kB' 'Active(anon): 8341852 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520552 kB' 'Mapped: 173244 kB' 'Shmem: 7824604 kB' 'KReclaimable: 198696 kB' 'Slab: 569080 kB' 'SReclaimable: 198696 kB' 'SUnreclaim: 370384 kB' 'KernelStack: 12832 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB'
[xtrace elided: the setup/common.sh@31/@32 read/compare/continue loop repeats identically for every /proc/meminfo key from MemTotal through HugePages_Free before matching HugePages_Rsvd]
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:00.487 08:37:18
setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:00.487 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44378980 kB' 'MemAvailable: 47886176 kB' 'Buffers: 2704 kB' 'Cached: 11723612 kB' 'SwapCached: 0 kB' 'Active: 8737364 kB' 'Inactive: 3506192 kB' 'Active(anon): 8341868 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520552 kB' 'Mapped: 173244 kB' 'Shmem: 7824628 kB' 'KReclaimable: 198696 kB' 'Slab: 569080 kB' 'SReclaimable: 198696 kB' 'SUnreclaim: 370384 kB' 'KernelStack: 12832 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB'
[xtrace elided: the same per-key read/compare/continue scan repeats for HugePages_Total, skipping MemTotal through KernelStack]
00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.488 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 
08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 
08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.489 08:37:18 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20351968 kB' 'MemUsed: 12524972 kB' 'SwapCached: 0 kB' 'Active: 5919292 kB' 'Inactive: 3357228 kB' 'Active(anon): 5647360 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9153372 kB' 'Mapped: 104860 kB' 'AnonPages: 126308 kB' 'Shmem: 5524212 kB' 'KernelStack: 6536 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93984 kB' 'Slab: 310940 kB' 'SReclaimable: 93984 kB' 'SUnreclaim: 216956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.489 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [xtrace condensed: the IFS=': ' read -r var val _ loop steps over every node0 meminfo field from MemTotal through HugePages_Free; each non-matching [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] test hits "continue"] 00:04:00.490 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.490 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:00.490 08:37:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:00.490 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.490 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.490 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.490 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.490 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:00.490 node0=1024 expecting 1024 00:04:00.491 08:37:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 ==
\1\0\2\4 ]] 00:04:00.491 00:04:00.491 real 0m2.468s 00:04:00.491 user 0m0.686s 00:04:00.491 sys 0m0.900s 00:04:00.491 08:37:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.491 08:37:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:00.491 ************************************ 00:04:00.491 END TEST default_setup 00:04:00.491 ************************************ 00:04:00.491 08:37:18 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:00.491 08:37:18 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.491 08:37:18 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.491 08:37:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.751 ************************************ 00:04:00.751 START TEST per_node_1G_alloc 00:04:00.751 ************************************ 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.751 08:37:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:00.751 08:37:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.751 08:37:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.686 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:01.686 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.686 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:01.686 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:01.686 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:01.686 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:01.686 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:01.686 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:01.686 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:01.686 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:01.686 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:01.687 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:01.687 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:01.687 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:01.687 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:01.687 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:01.687 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 
00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44369780 kB' 'MemAvailable: 47876980 kB' 'Buffers: 2704 kB' 'Cached: 11723680 kB' 'SwapCached: 0 kB' 'Active: 8737308 kB' 'Inactive: 3506192 kB' 'Active(anon): 8341812 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 
kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520536 kB' 'Mapped: 173248 kB' 'Shmem: 7824696 kB' 'KReclaimable: 198704 kB' 'Slab: 569376 kB' 'SReclaimable: 198704 kB' 'SUnreclaim: 370672 kB' 'KernelStack: 12784 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.953 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.953 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.953 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 
08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.954 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44370996 kB' 
'MemAvailable: 47878196 kB' 'Buffers: 2704 kB' 'Cached: 11723680 kB' 'SwapCached: 0 kB' 'Active: 8738204 kB' 'Inactive: 3506192 kB' 'Active(anon): 8342708 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521440 kB' 'Mapped: 173248 kB' 'Shmem: 7824696 kB' 'KReclaimable: 198704 kB' 'Slab: 569376 kB' 'SReclaimable: 198704 kB' 'SUnreclaim: 370672 kB' 'KernelStack: 12848 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.955 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 
08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.956 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.957 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44371776 kB' 'MemAvailable: 47878976 kB' 'Buffers: 2704 kB' 'Cached: 11723728 kB' 'SwapCached: 0 kB' 'Active: 8738028 kB' 'Inactive: 3506192 kB' 'Active(anon): 8342532 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521216 kB' 'Mapped: 173248 kB' 'Shmem: 7824744 kB' 'KReclaimable: 198704 kB' 'Slab: 569476 kB' 'SReclaimable: 198704 kB' 'SUnreclaim: 370772 kB' 'KernelStack: 12832 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.957 08:37:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.958 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.959 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:01.959 nr_hugepages=1024 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.959 resv_hugepages=0 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.959 surplus_hugepages=0 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.959 anon_hugepages=0 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.959 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44372464 kB' 'MemAvailable: 47879664 kB' 'Buffers: 2704 kB' 'Cached: 11723732 kB' 'SwapCached: 0 kB' 'Active: 8738156 kB' 'Inactive: 3506192 kB' 'Active(anon): 8342660 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521300 kB' 'Mapped: 173252 kB' 'Shmem: 7824748 kB' 'KReclaimable: 198704 kB' 'Slab: 569476 kB' 'SReclaimable: 198704 kB' 'SUnreclaim: 370772 kB' 'KernelStack: 12864 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9431868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
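The trace above is the `get_meminfo` helper from `setup/common.sh` scanning every `/proc/meminfo` field (`continue`-ing past each one) until it reaches the requested key, then echoing its value. As a rough, simplified sketch of that idiom (the real helper uses `mapfile` plus an extglob prefix strip; the function body below, including the file-path argument, is an illustrative approximation, not the SPDK source):

```shell
# Sketch of the get_meminfo parsing idiom: scan a meminfo-style file,
# skip non-matching keys, print the value of the requested one.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} line var val _
    while IFS= read -r line; do
        # Per-node files prefix each entry with "Node N "; drop it.
        line=${line#Node *[0-9] }
        # Split "Key: value unit" on ':' and spaces, as in the trace.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            printf '%s\n' "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    return 1
}
```

With the snapshot printed in the trace, `get_meminfo HugePages_Total` would walk past `MemTotal`, `MemFree`, and the rest until `HugePages_Total: 1024`, matching the `echo 1024` / `return 0` seen a few entries later.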
00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.959 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 
08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 
08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.960 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21386580 kB' 'MemUsed: 11490360 kB' 'SwapCached: 0 kB' 'Active: 5920780 kB' 'Inactive: 3357228 kB' 'Active(anon): 5648848 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9153456 kB' 'Mapped: 104872 kB' 'AnonPages: 127852 kB' 'Shmem: 5524296 kB' 'KernelStack: 6616 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 93984 kB' 'Slab: 311188 kB' 'SReclaimable: 93984 kB' 'SUnreclaim: 217204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.961 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.962 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22986984 kB' 'MemUsed: 4677788 kB' 'SwapCached: 0 kB' 'Active: 2817432 kB' 'Inactive: 148964 kB' 'Active(anon): 2693868 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2573004 kB' 'Mapped: 68380 kB' 'AnonPages: 393448 kB' 'Shmem: 2300476 kB' 'KernelStack: 6248 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104720 kB' 'Slab: 258288 kB' 'SReclaimable: 104720 kB' 'SUnreclaim: 153568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.963 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.964 node0=512 expecting 512 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:01.964 node1=512 expecting 512 00:04:01.964 08:37:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:01.964 00:04:01.964 real 0m1.356s 00:04:01.964 user 0m0.569s 00:04:01.964 sys 0m0.749s 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.964 08:37:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.964 ************************************ 00:04:01.964 END TEST per_node_1G_alloc 00:04:01.964 ************************************ 00:04:01.964 08:37:20 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:01.964 08:37:20 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.964 08:37:20 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.964 08:37:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.964 ************************************ 00:04:01.964 START TEST even_2G_alloc 00:04:01.964 ************************************ 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 
-- # local user_nodes 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.964 08:37:20 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.348 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:03.348 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.348 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:03.348 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:03.348 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:03.348 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:03.348 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:03.348 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:03.348 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:03.348 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:03.348 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:03.348 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:03.348 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:03.348 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:03.348 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:03.348 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:03.348 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@94 -- # local anon 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44368560 kB' 'MemAvailable: 47875736 kB' 'Buffers: 2704 kB' 'Cached: 11723824 kB' 'SwapCached: 0 kB' 'Active: 8742448 kB' 'Inactive: 3506192 kB' 'Active(anon): 8346952 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525368 kB' 'Mapped: 173836 kB' 'Shmem: 7824840 kB' 
'KReclaimable: 198656 kB' 'Slab: 569484 kB' 'SReclaimable: 198656 kB' 'SUnreclaim: 370828 kB' 'KernelStack: 12896 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9436856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:03.348 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.350 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44365236 kB' 'MemAvailable: 47872412 kB' 'Buffers: 2704 kB' 'Cached: 11723828 kB' 'SwapCached: 0 kB' 'Active: 8744244 kB' 'Inactive: 3506192 kB' 'Active(anon): 8348748 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527168 kB' 'Mapped: 174128 kB' 'Shmem: 7824844 kB' 'KReclaimable: 198656 kB' 'Slab: 569460 kB' 'SReclaimable: 198656 kB' 'SUnreclaim: 370804 kB' 'KernelStack: 12896 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9438208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.350 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.351 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 
08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44369468 kB' 'MemAvailable: 47876644 kB' 'Buffers: 2704 kB' 'Cached: 11723828 kB' 'SwapCached: 0 kB' 'Active: 8740764 kB' 'Inactive: 3506192 kB' 'Active(anon): 8345268 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523676 kB' 'Mapped: 173700 kB' 'Shmem: 7824844 kB' 'KReclaimable: 198656 kB' 'Slab: 569460 kB' 'SReclaimable: 198656 kB' 'SUnreclaim: 370804 kB' 'KernelStack: 12960 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9434916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 
'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.352 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.352 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.353 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.354 nr_hugepages=1024 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.354 resv_hugepages=0 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.354 
surplus_hugepages=0 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.354 anon_hugepages=0 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44363420 kB' 'MemAvailable: 47870596 kB' 'Buffers: 2704 kB' 'Cached: 11723868 kB' 'SwapCached: 0 kB' 'Active: 8744208 kB' 'Inactive: 3506192 kB' 'Active(anon): 8348712 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 
'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527060 kB' 'Mapped: 173700 kB' 'Shmem: 7824884 kB' 'KReclaimable: 198656 kB' 'Slab: 569460 kB' 'SReclaimable: 198656 kB' 'SUnreclaim: 370804 kB' 'KernelStack: 12912 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9438252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.354 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.354 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.355 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.356 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 
'MemFree: 21387536 kB' 'MemUsed: 11489404 kB' 'SwapCached: 0 kB' 'Active: 5921016 kB' 'Inactive: 3357228 kB' 'Active(anon): 5649084 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9153580 kB' 'Mapped: 105200 kB' 'AnonPages: 127840 kB' 'Shmem: 5524420 kB' 'KernelStack: 6600 kB' 'PageTables: 3340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93952 kB' 'Slab: 311176 kB' 'SReclaimable: 93952 kB' 'SUnreclaim: 217224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.356 08:37:21
[xtrace condensed: setup/common.sh@31-32 compares each node0 meminfo field above against HugePages_Surp and issues `continue` until the matching field is reached]
08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.357 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.357 08:37:21
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.357 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.357 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.357 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.357 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:03.357 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.357 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:03.357 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.357 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.357 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.358 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.358 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.358 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.358 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.358 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.358 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.358 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22975884 kB' 'MemUsed: 4688888 kB' 'SwapCached: 0 kB' 'Active: 2817392 kB' 'Inactive: 148964 kB' 'Active(anon): 2693828 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 
'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2573012 kB' 'Mapped: 68380 kB' 'AnonPages: 393416 kB' 'Shmem: 2300484 kB' 'KernelStack: 6280 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104704 kB' 'Slab: 258284 kB' 'SReclaimable: 104704 kB' 'SUnreclaim: 153580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.358 08:37:21
[xtrace condensed: setup/common.sh@31-32 compares each node1 meminfo field above against HugePages_Surp and issues `continue` until the matching field is reached]
08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.360 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.360 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc --
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:03.619 node0=512 expecting 512 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:03.619 node1=512 expecting 512 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:03.619 00:04:03.619 real 0m1.456s 00:04:03.619 user 0m0.628s 00:04:03.619 sys 0m0.789s 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.619 08:37:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.619 ************************************ 00:04:03.619 END TEST even_2G_alloc 00:04:03.619 ************************************ 00:04:03.619 08:37:21 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:03.619 08:37:21 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.619 08:37:21 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.619 08:37:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.619 ************************************ 00:04:03.619 START TEST odd_alloc 00:04:03.619 ************************************ 
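(Editor's note, not part of the captured log: the `IFS=': '` / `read -r var val _` / `continue` trace lines above come from a helper in setup/common.sh that scans `/proc/meminfo` for one key and echoes its value. A minimal standalone sketch of that splitting pattern, assuming input arrives on stdin rather than via the script's `mem` array, with an illustrative function name:)

```shell
# Hedged sketch of the meminfo lookup the xtrace records: split each
# "Key:   value kB" line on ': ' (colon plus whitespace), compare the key,
# and echo the numeric value. Not the exact setup/common.sh code.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped, mirroring the "continue" lines above.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Usage with a fabricated two-line snippet (values are illustrative):
printf 'HugePages_Total: 1025\nHugePages_Surp: 0\n' | get_meminfo_value HugePages_Surp
```

With `IFS=': '`, `read` splits on the colon and collapses the run of spaces, so `var` gets the key, `val` the number, and `_` swallows the trailing `kB` unit when present, which is why the trace shows exactly three read targets.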
00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 
00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.619 08:37:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.556 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:04.556 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:04.556 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:04.556 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:04.556 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:04.556 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:04.556 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:04.556 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:04.556 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:04.556 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:04.556 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:04.556 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:04.556 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:04.556 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:04.556 0000:80:04.2 (8086 0e22): Already using the 
vfio-pci driver 00:04:04.556 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:04.556 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:04.819 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.820 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44387228 kB' 'MemAvailable: 47894392 kB' 'Buffers: 2704 kB' 'Cached: 11723956 kB' 'SwapCached: 0 kB' 'Active: 8734636 kB' 'Inactive: 3506192 kB' 'Active(anon): 8339140 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517464 kB' 'Mapped: 172512 kB' 'Shmem: 7824972 kB' 'KReclaimable: 198632 kB' 'Slab: 569224 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370592 kB' 'KernelStack: 12832 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9417000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.820 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.821 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44387192 kB' 'MemAvailable: 47894356 kB' 'Buffers: 2704 kB' 'Cached: 11723956 kB' 'SwapCached: 0 kB' 'Active: 8735636 kB' 'Inactive: 3506192 kB' 'Active(anon): 8340140 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518468 kB' 'Mapped: 172512 kB' 'Shmem: 7824972 kB' 'KReclaimable: 198632 kB' 'Slab: 569196 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370564 kB' 'KernelStack: 12864 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9418152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.821 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[repetitive trace elided: the loop tests each remaining /proc/meminfo key, SwapCached through HugePages_Rsvd, against HugePages_Surp; every one fails the match and continues]
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.823 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44387496 kB' 'MemAvailable: 47894660 kB' 'Buffers: 2704 kB' 'Cached: 11723972 kB' 'SwapCached: 0 kB' 'Active: 8735180 kB' 'Inactive: 3506192 kB' 'Active(anon): 8339684 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517980 kB' 'Mapped: 172436 kB' 'Shmem: 7824988 kB' 'KReclaimable: 198632 kB' 'Slab: 569188 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370556 kB' 'KernelStack: 12880 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9419400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB'
[repetitive trace elided: the loop tests each /proc/meminfo key, MemTotal through HugePages_Free, against HugePages_Rsvd; every one fails the match and continues]
00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:04.825 nr_hugepages=1025 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.825 resv_hugepages=0 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.825 surplus_hugepages=0 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.825 anon_hugepages=0 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44386700 kB' 
'MemAvailable: 47893864 kB' 'Buffers: 2704 kB' 'Cached: 11723996 kB' 'SwapCached: 0 kB' 'Active: 8735676 kB' 'Inactive: 3506192 kB' 'Active(anon): 8340180 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518392 kB' 'Mapped: 172436 kB' 'Shmem: 7825012 kB' 'KReclaimable: 198632 kB' 'Slab: 569220 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370588 kB' 'KernelStack: 13152 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9440364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.825 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- #
get_meminfo HugePages_Surp 0 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21398756 kB' 'MemUsed: 11478184 kB' 'SwapCached: 0 kB' 'Active: 5919736 kB' 'Inactive: 3357228 kB' 'Active(anon): 5647804 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9153700 kB' 'Mapped: 105020 kB' 'AnonPages: 126448 kB' 'Shmem: 5524540 kB' 'KernelStack: 6696 kB' 'PageTables: 3492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93952 kB' 'Slab: 311012 kB' 'SReclaimable: 93952 kB' 'SUnreclaim: 217060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.828 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22987720 kB' 'MemUsed: 4677052 kB' 'SwapCached: 0 kB' 'Active: 2817620 kB' 'Inactive: 148964 kB' 'Active(anon): 2694056 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2573048 kB' 'Mapped: 68336 kB' 'AnonPages: 393688 kB' 'Shmem: 2300520 kB' 'KernelStack: 6280 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104680 kB' 'Slab: 258328 kB' 'SReclaimable: 104680 kB' 'SUnreclaim: 153648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.828 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.829 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.830 08:37:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:04.830 node0=512 expecting 513 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:04.830 node1=513 expecting 512 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:04.830 00:04:04.830 real 0m1.422s 00:04:04.830 user 0m0.598s 00:04:04.830 sys 0m0.786s 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.830 08:37:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:04.830 ************************************ 00:04:04.830 END TEST odd_alloc 00:04:04.830 ************************************ 00:04:05.092 08:37:23 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:05.092 08:37:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.092 08:37:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.092 08:37:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.092 ************************************ 00:04:05.092 START TEST custom_alloc 00:04:05.092 
************************************ 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 
0 > 0 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # local user_nodes 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.092 08:37:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.030 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:06.030 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:06.030 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:04:06.030 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:06.030 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:06.030 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:06.030 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:06.030 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:06.030 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:06.030 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:06.030 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:06.030 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:06.030 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:06.030 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:06.030 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:06.030 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:06.030 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- 
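The records above repeatedly expand `get_test_nr_hugepages_per_node` and then join `nodes_hp` into the `HUGENODE` string handed to `setup.sh`. A minimal standalone sketch of that per-node split — variable names borrowed from `setup/hugepages.sh`, but simplified here rather than the script's exact implementation — is:

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage split traced above: nodes_hp holds the
# requested 2 MB pages per NUMA node, HUGENODE the comma-joined form
# passed to setup.sh, and _nr_hugepages the resulting total.
nodes_hp=([0]=512 [1]=1024)

_nr_hugepages=0
HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done

# Join with commas, matching HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
HUGENODE_STR=$(IFS=,; echo "${HUGENODE[*]}")

echo "$HUGENODE_STR"    # nodes_hp[0]=512,nodes_hp[1]=1024
echo "$_nr_hugepages"   # 1536
```

The 1536 total is what the log then records as `nr_hugepages=1536` before `verify_nr_hugepages` runs.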
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43310188 kB' 'MemAvailable: 46817352 kB' 'Buffers: 2704 kB' 'Cached: 11724092 kB' 'SwapCached: 0 kB' 'Active: 8736352 kB' 'Inactive: 3506192 kB' 'Active(anon): 8340856 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519072 kB' 'Mapped: 173456 kB' 'Shmem: 7825108 kB' 'KReclaimable: 198632 kB' 'Slab: 569392 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370760 kB' 'KernelStack: 12944 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 
kB' 'Committed_AS: 9451792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 
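The `/proc/meminfo` snapshot above reports `HugePages_Total: 1536` at a `Hugepagesize` of 2048 kB, which is internally consistent with the `Hugetlb: 3145728 kB` line. A quick check of that arithmetic, with the values copied from the dump:

```shell
# Consistency check on the hugepage figures quoted in the meminfo dump.
hugepages_total=1536   # HugePages_Total
hugepagesize_kb=2048   # Hugepagesize, in kB
hugetlb_kb=3145728     # Hugetlb, in kB

computed=$(( hugepages_total * hugepagesize_kb ))
echo "$computed"       # 3145728
[ "$computed" -eq "$hugetlb_kb" ] && echo consistent
```

The same total matches the requested per-node split (512 + 1024 = 1536), so the allocation the test asked for is exactly what the kernel reports.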
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.294 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 
08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.295 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43310300 kB' 'MemAvailable: 46817464 kB' 'Buffers: 2704 kB' 'Cached: 11724096 kB' 'SwapCached: 0 kB' 'Active: 8736336 kB' 'Inactive: 3506192 kB' 'Active(anon): 8340840 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519088 kB' 'Mapped: 173448 kB' 'Shmem: 7825112 kB' 'KReclaimable: 198632 kB' 'Slab: 569392 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370760 kB' 'KernelStack: 12960 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9451812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.296 
08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.296 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[identical [[ key == HugePages_Surp ]] / continue / IFS=': ' / read -r var val _ trace repeated for each remaining /proc/meminfo key: Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd -- none match]
00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43310048 kB' 'MemAvailable: 46817212 kB' 'Buffers: 2704 kB' 'Cached: 11724108 kB' 'SwapCached: 0 kB' 'Active: 8736340 kB' 'Inactive: 3506192 kB' 'Active(anon): 8340844 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519068 kB' 'Mapped: 173448 kB' 'Shmem: 7825124 kB' 'KReclaimable: 198632 kB' 'Slab: 569392 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370760 kB' 'KernelStack: 12976 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9451832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB'
00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.297 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[identical [[ key == HugePages_Rsvd ]] / continue / IFS=': ' / read -r var val _ trace repeated for each subsequent /proc/meminfo key (MemFree through HugePages_Total) until HugePages_Rsvd matches]
00:04:06.299 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.299 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.299 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.299 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.299 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:06.300 nr_hugepages=1536 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.300 resv_hugepages=0 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.300 surplus_hugepages=0 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.300 anon_hugepages=0 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 
-- # local var val 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43310048 kB' 'MemAvailable: 46817212 kB' 'Buffers: 2704 kB' 'Cached: 11724132 kB' 'SwapCached: 0 kB' 'Active: 8736240 kB' 'Inactive: 3506192 kB' 'Active(anon): 8340744 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518912 kB' 'Mapped: 173372 kB' 'Shmem: 7825148 kB' 'KReclaimable: 198632 kB' 'Slab: 569400 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370768 kB' 'KernelStack: 12976 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9451852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.300 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.301 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- #
local var val 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21391508 kB' 'MemUsed: 11485432 kB' 'SwapCached: 0 kB' 'Active: 5919192 kB' 'Inactive: 3357228 kB' 'Active(anon): 5647260 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9153708 kB' 'Mapped: 105040 kB' 'AnonPages: 125964 kB' 'Shmem: 5524548 kB' 'KernelStack: 6632 kB' 'PageTables: 3172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93952 kB' 'Slab: 311040 kB' 'SReclaimable: 93952 kB' 'SUnreclaim: 217088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 
08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.302 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.303 08:37:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.303 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.562 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.562 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.562 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.563 08:37:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.563 08:37:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21918316 kB' 'MemUsed: 5746456 kB' 'SwapCached: 0 kB' 'Active: 2817272 kB' 'Inactive: 148964 kB' 'Active(anon): 2693708 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2573172 kB' 'Mapped: 68332 kB' 'AnonPages: 393164 kB' 'Shmem: 2300644 kB' 'KernelStack: 6360 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104680 kB' 'Slab: 258360 kB' 'SReclaimable: 104680 kB' 'SUnreclaim: 153680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.563 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue iterations for the remaining meminfo keys elided ...]
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:06.564
00:04:06.564 real	0m1.462s
00:04:06.564 user	0m0.646s
00:04:06.564 sys	0m0.777s
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:06.564 08:37:24 setup.sh.hugepages.custom_alloc --
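The long run of compare/continue lines traced above is a single key scan: read each "key: value" line of the chosen (per-node) meminfo file until the requested key matches, then print its value. A minimal sketch of that loop, assuming it mirrors the `get_meminfo` helper in `setup/common.sh` exercised here (the default-to-0 fallback and the demo file below are illustrative, not the SPDK script itself; the real helper also strips the `Node <id> ` prefix that per-node files carry):

```shell
# Sketch of a get_meminfo-style key scan (assumption: simplified from the
# setup/common.sh flow shown in the xtrace above).
get_meminfo() {
    local get=$1 mem_f=$2 var val _
    # IFS=': ' splits each line on colons and spaces: key, value, unit
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # first matching key wins
            return 0
        fi
    done < "$mem_f"
    echo 0                 # key absent: default to 0 (assumption in this sketch)
}

# Demo against sample data rather than /sys/devices/system/node/nodeN/meminfo
# (values are illustrative):
tmp=$(mktemp)
printf '%s\n' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' > "$tmp"
get_meminfo HugePages_Surp "$tmp"    # prints 0
rm -f "$tmp"
```

This is why each lookup in the log ends with `echo 0` / `return 0`: the scan reaches `HugePages_Surp` and emits its value.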
common/autotest_common.sh@10 -- # set +x 00:04:06.564 ************************************ 00:04:06.564 END TEST custom_alloc 00:04:06.564 ************************************ 00:04:06.564 08:37:24 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:06.564 08:37:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.564 08:37:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.564 08:37:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.564 ************************************ 00:04:06.564 START TEST no_shrink_alloc 00:04:06.564 ************************************ 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 
00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:06.564 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:06.565 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:06.565 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:06.565 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:06.565 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:06.565 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:06.565 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:06.565 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.565 08:37:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:07.503 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:07.503 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:07.503 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:07.503 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:07.503 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:07.503 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:07.503 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:07.503 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:07.503 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:07.503 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:07.503 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:07.766 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:07.766 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:07.766 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:07.766 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:07.766 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:07.766 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 --
# [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44368116 kB' 'MemAvailable: 47875280 kB' 'Buffers: 2704 kB' 'Cached: 11724220 kB' 'SwapCached: 0 kB' 'Active: 8736904 kB' 'Inactive: 3506192 kB' 'Active(anon): 8341408 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519540 kB' 'Mapped: 173440 kB' 'Shmem: 7825236 kB' 'KReclaimable: 198632 kB' 'Slab: 569052 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370420 kB' 'KernelStack: 12992 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9452288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.766 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.767 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.768 
08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44369664 kB' 'MemAvailable: 47876828 kB' 'Buffers: 2704 kB' 'Cached: 11724220 kB' 'SwapCached: 0 kB' 'Active: 8736776 kB' 'Inactive: 3506192 kB' 'Active(anon): 8341280 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519384 kB' 'Mapped: 173460 kB' 'Shmem: 7825236 kB' 'KReclaimable: 198632 kB' 'Slab: 569100 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370468 kB' 'KernelStack: 12992 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9452304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.768 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.768 
08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # get_meminfo HugePages_Surp: per-field scan of /proc/meminfo. Every field from Active(file) through Unaccepted fails the [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] test and takes `continue`; the identical IFS=': ' / `read -r var val _` / `continue` trace triplet repeated for each field is elided here. 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44370748 kB' 'MemAvailable: 47877912 kB' 'Buffers: 2704 kB' 'Cached: 11724220 kB' 'SwapCached: 0 kB' 'Active: 8737388 kB' 'Inactive: 3506192 kB' 'Active(anon): 8341892 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519964 kB' 'Mapped: 173384 kB' 'Shmem: 7825236 kB' 'KReclaimable: 198632 kB' 'Slab: 569092 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370460 kB' 'KernelStack: 13040 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9452328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB'
00:04:07.770 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # get_meminfo HugePages_Rsvd: the same per-field scan repeats, now against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, starting from MemTotal; the repeated trace triplets are again elided. This chunk of the log is truncated mid-scan:
00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@100 -- # resv=0 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.032 nr_hugepages=1024 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.032 resv_hugepages=0 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.032 surplus_hugepages=0 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.032 anon_hugepages=0 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44372264 kB' 'MemAvailable: 47879428 kB' 'Buffers: 2704 kB' 'Cached: 11724256 kB' 'SwapCached: 0 kB' 'Active: 8736512 kB' 'Inactive: 3506192 kB' 'Active(anon): 8341016 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518980 kB' 'Mapped: 173384 kB' 'Shmem: 7825272 kB' 'KReclaimable: 198632 kB' 'Slab: 569092 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370460 kB' 'KernelStack: 13024 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9452352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:08.032 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:08.033 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.034 
08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20343048 kB' 'MemUsed: 12533892 kB' 'SwapCached: 0 kB' 'Active: 5919532 kB' 'Inactive: 3357228 kB' 'Active(anon): 5647600 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9153772 kB' 'Mapped: 105488 kB' 'AnonPages: 126188 kB' 'Shmem: 5524612 kB' 'KernelStack: 6616 kB' 'PageTables: 3132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93952 kB' 'Slab: 310872 kB' 'SReclaimable: 93952 kB' 'SUnreclaim: 216920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.034 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.034-00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (loop repeats for fields: MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free -- each non-matching field skipped via IFS=': '; read -r var val _; continue)
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:08.036 08:37:26
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.036 08:37:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:08.975 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:08.975 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:08.975 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:08.975 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:08.975 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:08.975 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:08.975 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:08.975 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:08.975 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:08.975 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:08.975 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:08.975 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:08.975 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:08.975 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:08.975 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:08.975 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:08.975 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:08.975 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:09.238 08:37:27
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.238 08:37:27
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44355756 kB' 'MemAvailable: 47862920 kB' 'Buffers: 2704 kB' 'Cached: 11724328 kB' 'SwapCached: 0 kB' 'Active: 8742636 kB' 'Inactive: 3506192 kB' 'Active(anon): 8347140 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525128 kB' 'Mapped: 173836 kB' 'Shmem: 7825344 kB' 'KReclaimable: 198632 kB' 'Slab: 569352 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370720 kB' 'KernelStack: 12992 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9458652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196324 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB'
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:09.238 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:09.239 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.239 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.239 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:09.239
08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (loop repeats for fields: MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted -- each non-matching field skipped via IFS=': '; read -r var val _; continue)
00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 
44360116 kB' 'MemAvailable: 47867280 kB' 'Buffers: 2704 kB' 'Cached: 11724332 kB' 'SwapCached: 0 kB' 'Active: 8736944 kB' 'Inactive: 3506192 kB' 'Active(anon): 8341448 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519348 kB' 'Mapped: 173744 kB' 'Shmem: 7825348 kB' 'KReclaimable: 198632 kB' 'Slab: 569344 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370712 kB' 'KernelStack: 13008 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9453568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196288 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 
08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.240 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.241 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.242 08:37:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44355616 kB' 'MemAvailable: 47862780 kB' 'Buffers: 2704 kB' 'Cached: 11724352 kB' 'SwapCached: 0 kB' 'Active: 8739924 kB' 'Inactive: 3506192 kB' 'Active(anon): 8344428 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522356 kB' 'Mapped: 173832 kB' 'Shmem: 7825368 kB' 'KReclaimable: 198632 kB' 'Slab: 569344 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370712 kB' 'KernelStack: 13008 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9456436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196272 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.242 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.243 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.244 nr_hugepages=1024 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.244 resv_hugepages=0 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.244 surplus_hugepages=0 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.244 
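The trace above shows `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` one line at a time with `IFS=': ' read -r var val _`, skipping (`continue`) every key until it matches the requested one, then echoing its value. A minimal sketch of that pattern (simplified: the real helper also uses `mapfile` and strips `Node N` prefixes for per-node queries, which this sketch omits):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: scan /proc/meminfo
# for a single key and print its numeric value (in kB where applicable).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Each non-matching key corresponds to one "continue" line in the trace.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Example: the two values the trace resolves just above.
echo "resv_hugepages=$(get_meminfo HugePages_Rsvd)"
echo "nr_hugepages=$(get_meminfo HugePages_Total)"
```

With `IFS=': '`, a line such as `HugePages_Rsvd:        0` splits into `var=HugePages_Rsvd`, `val=0`, which is why the trace compares each key against the escaped pattern `\H\u\g\e\P\a\g\e\s\_\R\s\v\d`.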
anon_hugepages=0 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44355004 kB' 'MemAvailable: 47862168 kB' 'Buffers: 2704 kB' 'Cached: 11724352 kB' 'SwapCached: 0 kB' 'Active: 8742736 kB' 'Inactive: 3506192 kB' 'Active(anon): 8347240 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525324 kB' 'Mapped: 174248 kB' 'Shmem: 7825368 kB' 'KReclaimable: 198632 kB' 'Slab: 569372 kB' 'SReclaimable: 198632 kB' 'SUnreclaim: 370740 kB' 'KernelStack: 13088 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9458712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196228 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1814108 kB' 'DirectMap2M: 14882816 kB' 'DirectMap1G: 52428800 kB' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.244 
08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.244 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 
08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.245 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.246 08:37:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20345740 kB' 'MemUsed: 12531200 kB' 'SwapCached: 0 kB' 'Active: 5919188 kB' 'Inactive: 3357228 kB' 'Active(anon): 5647256 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9153876 kB' 'Mapped: 105384 kB' 'AnonPages: 125684 kB' 'Shmem: 5524716 kB' 'KernelStack: 6584 kB' 'PageTables: 3036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93952 kB' 'Slab: 310924 kB' 'SReclaimable: 93952 kB' 'SUnreclaim: 216972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.246 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.247 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.248 node0=1024 expecting 1024 00:04:09.248 08:37:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.248 00:04:09.248 real 0m2.745s 00:04:09.248 user 0m1.149s 00:04:09.248 sys 0m1.516s 00:04:09.248 08:37:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.248 08:37:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:09.248 ************************************ 00:04:09.248 END TEST no_shrink_alloc 00:04:09.248 ************************************ 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:09.248 08:37:27 setup.sh.hugepages -- 
setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:09.248 08:37:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:09.248 00:04:09.248 real 0m11.300s 00:04:09.248 user 0m4.425s 00:04:09.248 sys 0m5.781s 00:04:09.248 08:37:27 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.248 08:37:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.248 ************************************ 00:04:09.248 END TEST hugepages 00:04:09.248 ************************************ 00:04:09.248 08:37:27 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:09.248 08:37:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.248 08:37:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.248 08:37:27 setup.sh -- common/autotest_common.sh@10 
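
The hugepages test traced above spends most of its output inside `setup/common.sh`'s `get_meminfo`: it reads meminfo line by line with `IFS=': '` and `continue`s past every key until it hits the one requested (here `HugePages_Total`, then `HugePages_Surp`), at which point it echoes the value and returns. A minimal standalone sketch of that pattern (not the SPDK implementation itself, which also handles per-node `/sys/devices/system/node/nodeN/meminfo` files):

```shell
#!/bin/sh
# Hedged sketch of the get_meminfo pattern seen in the xtrace above:
# split each /proc/meminfo line on ": " into key and value, skip
# non-matching keys, print the value of the requested key.
get_meminfo() {
    get=$1
    while IFS=': ' read -r var val _; do
        # Matching key found: emit its numeric value and stop.
        if [ "$var" = "$get" ]; then
            echo "$val"
            return 0
        fi
        # Otherwise fall through to the next line (the "continue"
        # branches that dominate the log output above).
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Total
```

The trailing `_` in the `read` discards the unit column (`kB`), which is why hugepage counters, which have no unit, parse the same way as sized counters like `MemTotal`.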
-- # set +x 00:04:09.248 ************************************ 00:04:09.248 START TEST driver 00:04:09.248 ************************************ 00:04:09.248 08:37:27 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:09.506 * Looking for test storage... 00:04:09.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:09.506 08:37:27 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:09.506 08:37:27 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.506 08:37:27 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.042 08:37:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:12.042 08:37:30 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.042 08:37:30 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.042 08:37:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.042 ************************************ 00:04:12.042 START TEST guess_driver 00:04:12.042 ************************************ 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e 
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:12.042 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:12.042 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:12.042 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:12.042 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:12.042 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:12.042 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:12.042 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:12.042 
08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:12.042 Looking for driver=vfio-pci 00:04:12.042 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.043 08:37:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:12.043 08:37:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.043 08:37:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.980 08:37:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.919 08:37:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.919 08:37:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.919 08:37:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.919 08:37:32 
setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:13.919 08:37:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:13.919 08:37:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.919 08:37:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.453 00:04:16.453 real 0m4.599s 00:04:16.453 user 0m1.033s 00:04:16.453 sys 0m1.716s 00:04:16.453 08:37:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.453 08:37:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.453 ************************************ 00:04:16.453 END TEST guess_driver 00:04:16.453 ************************************ 00:04:16.453 00:04:16.453 real 0m7.145s 00:04:16.453 user 0m1.588s 00:04:16.453 sys 0m2.733s 00:04:16.453 08:37:34 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.453 08:37:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.453 ************************************ 00:04:16.453 END TEST driver 00:04:16.453 ************************************ 00:04:16.453 08:37:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:16.453 08:37:34 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.453 08:37:34 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.453 08:37:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:16.453 ************************************ 00:04:16.453 START TEST devices 00:04:16.453 ************************************ 00:04:16.453 08:37:34 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:16.453 * Looking for test storage... 
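The guess_driver trace above reduces to one decision: prefer vfio-pci when the kernel exposes IOMMU groups, or when unsafe no-IOMMU mode is enabled, and confirm the module resolves via modprobe. A minimal standalone sketch of that decision — the function name and the simplified plain-value inputs are illustrative, not the setup/driver.sh API:

```shell
#!/usr/bin/env bash
# Sketch of the vfio-pci selection traced above. Inputs are simplified to
# plain values so the decision is testable without sysfs access:
#   $1 = contents of /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
#   $2 = number of entries under /sys/kernel/iommu_groups/
pick_driver() {
    local unsafe_vfio=$1 iommu_groups=$2
    if [[ $unsafe_vfio == [yY] ]] || (( iommu_groups > 0 )); then
        # The real script additionally runs `modprobe --show-depends
        # vfio_pci` and checks the output resolves to .ko files.
        echo vfio-pci
    else
        echo "No valid driver found"
        return 1
    fi
}

pick_driver N 141    # the run above: 141 IOMMU groups, so vfio-pci wins
```

In the run above, unsafe_vfio is N but 141 IOMMU groups are present, so the first branch selects vfio-pci.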
00:04:16.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:16.453 08:37:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:16.453 08:37:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:16.453 08:37:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.453 08:37:34 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:17.834 08:37:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:17.834 08:37:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:17.834 08:37:36 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:17.834 08:37:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.834 08:37:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:17.834 08:37:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:17.834 08:37:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.834 08:37:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:17.834 08:37:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:17.834 08:37:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:17.834 08:37:36 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:18.094 No valid GPT data, bailing 00:04:18.094 08:37:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.094 08:37:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.094 08:37:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.094 08:37:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:18.094 08:37:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:18.094 08:37:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:18.094 08:37:36 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:18.094 08:37:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:18.094 08:37:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.094 08:37:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:18.094 08:37:36 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:18.094 08:37:36 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:18.094 08:37:36 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:18.094 08:37:36 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.094 08:37:36 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.094 08:37:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.094 ************************************ 00:04:18.094 START TEST nvme_mount 00:04:18.094 ************************************ 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:18.094 08:37:36 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:18.094 08:37:36 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:19.044 Creating new GPT entries in memory. 00:04:19.044 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:19.044 other utilities. 00:04:19.044 08:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:19.044 08:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.044 08:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:19.044 08:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:19.044 08:37:37 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:19.981 Creating new GPT entries in memory. 00:04:19.981 The operation has completed successfully. 
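The sgdisk call above creates partition 1 as --new=1:2048:2099199. Those bounds follow from the arithmetic in the trace (common.sh@51, @58, @59): the 1 GiB partition size is converted to 512-byte sectors, the first partition starts at sector 2048, and the end sector is part_start + size - 1. A sketch of just that arithmetic:

```shell
#!/usr/bin/env bash
# Sketch of the partition-bounds arithmetic traced above:
# bytes -> 512-byte sectors -> sgdisk start:end for partition 1.
size=1073741824                        # 1 GiB partition, in bytes
(( size /= 512 ))                      # now 2097152 sectors of 512 bytes
part_start=2048                        # first partition starts at sector 2048
(( part_end = part_start + size - 1 ))
echo "--new=1:${part_start}:${part_end}"   # prints --new=1:2048:2099199
```

This matches the flock-wrapped `sgdisk /dev/nvme0n1 --new=1:2048:2099199` invocation in the log.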
00:04:19.981 08:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:19.981 08:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.981 08:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 823039 00:04:19.981 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.981 08:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:19.981 08:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.981 08:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:19.981 08:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.239 08:37:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.176 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:21.436 
08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:21.436 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.436 08:37:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:21.695 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:21.695 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:21.695 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:21.695 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.695 08:37:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:23.074 08:37:41 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.074 08:37:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status
00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:24.015 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:24.276 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:24.276 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:24.276 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:04:24.276 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:04:24.276 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:24.276 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:24.276 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:24.276 08:37:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:24.276 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:24.276
00:04:24.276 real 0m6.212s
00:04:24.276 user 0m1.455s
00:04:24.276 sys 0m2.325s
00:04:24.276 08:37:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:24.276 08:37:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:04:24.276 ************************************
00:04:24.276 END TEST nvme_mount
00:04:24.276 ************************************
00:04:24.276 08:37:42 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:24.276 08:37:42 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:24.276 08:37:42 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:24.276 08:37:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:24.276 ************************************
00:04:24.276 START TEST dm_mount
00:04:24.276 ************************************
00:04:24.276 08:37:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount
00:04:24.276 08:37:42 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:24.276 08:37:42 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:24.276 08:37:42 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:24.276 08:37:42 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:24.276 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:24.277 08:37:42 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:25.263 Creating new GPT entries in memory.
00:04:25.263 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:25.263 other utilities.
00:04:25.263 08:37:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:25.263 08:37:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:25.263 08:37:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:25.263 08:37:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:25.263 08:37:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:26.230 Creating new GPT entries in memory.
00:04:26.230 The operation has completed successfully.
00:04:26.230 08:37:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:26.230 08:37:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:26.230 08:37:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:26.230 08:37:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:26.230 08:37:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:27.613 The operation has completed successfully.
00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 825431 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.613 08:37:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.551 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.552 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.552 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.552 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.552 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.552 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:28.552 08:37:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:28.811 08:37:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.812 08:37:47 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.812 08:37:47 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:29.752 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:30.012 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef
00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:30.012
00:04:30.012 real 0m5.674s
00:04:30.012 user 0m0.994s
00:04:30.012 sys 0m1.552s
00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:30.012 08:37:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:04:30.012 ************************************
00:04:30.012 END TEST dm_mount
00:04:30.012 ************************************
00:04:30.012 08:37:48 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:04:30.012 08:37:48 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:30.012 08:37:48 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:30.012 08:37:48 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:30.012 08:37:48 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:30.012 08:37:48 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:30.012 08:37:48 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:30.273 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:30.273 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:30.273 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:30.273 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:30.273 08:37:48 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:30.273 08:37:48 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:30.273 08:37:48 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:30.273 08:37:48 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:30.273 08:37:48 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:30.273 08:37:48 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:30.273 08:37:48 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:30.273
00:04:30.273 real 0m13.760s
00:04:30.273 user 0m3.084s
00:04:30.273 sys 0m4.883s
00:04:30.273 08:37:48 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:30.273 08:37:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:30.273 ************************************
00:04:30.273 END TEST devices
00:04:30.273 ************************************
00:04:30.273
00:04:30.273 real 0m42.844s
00:04:30.273 user 0m12.453s
00:04:30.273 sys 0m18.684s
00:04:30.273 08:37:48 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:30.273 08:37:48 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:30.273 ************************************
00:04:30.273 END TEST setup.sh
00:04:30.273 ************************************
00:04:30.273 08:37:48 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:31.655 Hugepages
00:04:31.655 node hugesize free / total
00:04:31.655 node0 1048576kB 0 / 0
00:04:31.655 node0 2048kB 2048 / 2048
00:04:31.655 node1 1048576kB 0 / 0
00:04:31.655 node1 2048kB 0 / 0
00:04:31.655
00:04:31.655 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:31.655 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:04:31.655 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:04:31.655 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:04:31.655 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:04:31.655 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:04:31.655 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:04:31.655 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:04:31.655 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:04:31.655 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:04:31.655 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:04:31.655 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:04:31.655 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:04:31.655 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:04:31.655 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:04:31.655 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:04:31.655 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:04:31.655 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:04:31.655 08:37:49 -- spdk/autotest.sh@130 -- # uname -s
00:04:31.655 08:37:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:31.655 08:37:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:31.655 08:37:49 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:32.592 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:32.592 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:32.592 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:32.592 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:32.592 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:32.592 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:32.592 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:32.592 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:32.592 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:32.592 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:32.592 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:32.852 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:32.852 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:32.852 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:32.852 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:32.852 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:33.792 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:33.792 08:37:52 -- common/autotest_common.sh@1532 -- # sleep 1
00:04:34.730 08:37:53 --
common/autotest_common.sh@1533 -- # bdfs=() 00:04:34.730 08:37:53 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:34.730 08:37:53 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:34.730 08:37:53 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:34.730 08:37:53 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:34.730 08:37:53 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:34.730 08:37:53 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.731 08:37:53 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:34.731 08:37:53 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:34.731 08:37:53 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:34.731 08:37:53 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:34.731 08:37:53 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.668 Waiting for block devices as requested 00:04:35.928 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:35.928 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:36.187 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:36.187 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:36.187 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:36.187 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:36.445 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:36.445 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:36.445 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:36.445 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:36.705 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:36.705 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:36.705 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:36.705 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:36.964 0000:80:04.2 (8086 0e22): vfio-pci -> 
ioatdma 00:04:36.964 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:36.964 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:37.222 08:37:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:37.222 08:37:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:37.222 08:37:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.222 08:37:55 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:37.222 08:37:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:37.222 08:37:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:37.222 08:37:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:37.222 08:37:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:37.222 08:37:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:37.222 08:37:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:37.222 08:37:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:37.222 08:37:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:37.222 08:37:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:37.223 08:37:55 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:37.223 08:37:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:37.223 08:37:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:37.223 08:37:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:37.223 08:37:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:37.223 08:37:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:37.223 08:37:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:37.223 08:37:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:37.223 08:37:55 -- 
common/autotest_common.sh@1557 -- # continue 00:04:37.223 08:37:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:37.223 08:37:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.223 08:37:55 -- common/autotest_common.sh@10 -- # set +x 00:04:37.223 08:37:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:37.223 08:37:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.223 08:37:55 -- common/autotest_common.sh@10 -- # set +x 00:04:37.223 08:37:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.599 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:38.599 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:38.599 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:38.599 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:38.599 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:38.599 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:38.599 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:38.599 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:38.599 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:38.599 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:38.599 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:38.599 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:38.599 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:38.599 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:38.599 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:38.599 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:39.540 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:39.540 08:37:57 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:39.540 08:37:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:39.540 08:37:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.540 08:37:57 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:39.540 08:37:57 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:39.540 08:37:57 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:39.540 08:37:57 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:39.540 08:37:57 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:39.540 08:37:57 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:39.540 08:37:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:39.540 08:37:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:39.540 08:37:57 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.540 08:37:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:39.540 08:37:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:39.540 08:37:57 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:39.540 08:37:57 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:39.540 08:37:57 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:39.540 08:37:57 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:39.540 08:37:57 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:39.540 08:37:57 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:39.540 08:37:57 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:39.540 08:37:57 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:39.540 08:37:57 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:39.540 08:37:57 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=830615 00:04:39.540 08:37:57 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.540 08:37:57 -- common/autotest_common.sh@1598 -- # waitforlisten 830615 00:04:39.540 08:37:57 -- common/autotest_common.sh@831 -- # '[' -z 830615 ']' 00:04:39.540 08:37:57 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:39.540 08:37:57 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.540 08:37:57 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.540 08:37:57 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.540 08:37:57 -- common/autotest_common.sh@10 -- # set +x 00:04:39.800 [2024-07-26 08:37:58.015037] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:04:39.800 [2024-07-26 08:37:58.015169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830615 ] 00:04:39.800 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.800 [2024-07-26 08:37:58.048524] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
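The `get_nvme_bdfs_by_id` steps traced above enumerate controller BDFs and keep only those whose PCI device ID (read from sysfs) matches 0x0a54. A minimal sketch of that filter; the `sysfs_root` stand-in directory replaces the real `/sys/bus/pci/devices` so the sketch runs anywhere, and the second BDF and its device ID are invented for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for /sys/bus/pci/devices so the sketch is self-contained.
sysfs_root=$(mktemp -d)
mkdir -p "$sysfs_root/0000:88:00.0" "$sysfs_root/0000:89:00.0"
echo 0x0a54 > "$sysfs_root/0000:88:00.0/device"   # the controller seen in the trace
echo 0x0b60 > "$sysfs_root/0000:89:00.0/device"   # invented non-matching device

want=0x0a54
bdfs=()
for dir in "$sysfs_root"/*; do
  # Keep only BDFs whose PCI device ID matches the wanted one.
  [[ $(cat "$dir/device") == "$want" ]] && bdfs+=("$(basename "$dir")")
done
printf '%s\n' "${bdfs[@]}"   # 0000:88:00.0
```

In the real harness the candidate BDF list comes from `gen_nvme.sh | jq -r '.config[].params.traddr'`, as shown in the trace.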
00:04:39.800 [2024-07-26 08:37:58.078917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.800 [2024-07-26 08:37:58.172757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.059 08:37:58 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.059 08:37:58 -- common/autotest_common.sh@864 -- # return 0 00:04:40.059 08:37:58 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:40.059 08:37:58 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:40.059 08:37:58 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:43.380 nvme0n1 00:04:43.380 08:38:01 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:43.380 [2024-07-26 08:38:01.721688] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:43.380 [2024-07-26 08:38:01.721749] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:43.380 request: 00:04:43.380 { 00:04:43.380 "nvme_ctrlr_name": "nvme0", 00:04:43.380 "password": "test", 00:04:43.380 "method": "bdev_nvme_opal_revert", 00:04:43.380 "req_id": 1 00:04:43.380 } 00:04:43.380 Got JSON-RPC error response 00:04:43.380 response: 00:04:43.380 { 00:04:43.380 "code": -32603, 00:04:43.380 "message": "Internal error" 00:04:43.380 } 00:04:43.380 08:38:01 -- common/autotest_common.sh@1604 -- # true 00:04:43.380 08:38:01 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:43.380 08:38:01 -- common/autotest_common.sh@1608 -- # killprocess 830615 00:04:43.380 08:38:01 -- common/autotest_common.sh@950 -- # '[' -z 830615 ']' 00:04:43.380 08:38:01 -- common/autotest_common.sh@954 -- # kill -0 830615 00:04:43.380 08:38:01 -- common/autotest_common.sh@955 -- # uname 00:04:43.380 08:38:01 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.380 08:38:01 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 830615 00:04:43.380 08:38:01 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.380 08:38:01 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.380 08:38:01 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 830615' 00:04:43.380 killing process with pid 830615 00:04:43.380 08:38:01 -- common/autotest_common.sh@969 -- # kill 830615 00:04:43.380 08:38:01 -- common/autotest_common.sh@974 -- # wait 830615 00:04:45.287 08:38:03 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:45.287 08:38:03 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:45.287 08:38:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:45.287 08:38:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:45.287 08:38:03 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:45.287 08:38:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.287 08:38:03 -- common/autotest_common.sh@10 -- # set +x 00:04:45.287 08:38:03 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:45.287 08:38:03 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:45.287 08:38:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.287 08:38:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.287 08:38:03 -- common/autotest_common.sh@10 -- # set +x 00:04:45.287 ************************************ 00:04:45.287 START TEST env 00:04:45.287 ************************************ 00:04:45.287 08:38:03 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:45.287 * Looking for test storage... 
00:04:45.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:45.287 08:38:03 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:45.287 08:38:03 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.287 08:38:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.287 08:38:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.287 ************************************ 00:04:45.287 START TEST env_memory 00:04:45.287 ************************************ 00:04:45.287 08:38:03 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:45.287 00:04:45.287 00:04:45.287 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.287 http://cunit.sourceforge.net/ 00:04:45.287 00:04:45.287 00:04:45.287 Suite: memory 00:04:45.287 Test: alloc and free memory map ...[2024-07-26 08:38:03.643908] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:45.287 passed 00:04:45.287 Test: mem map translation ...[2024-07-26 08:38:03.664401] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:45.287 [2024-07-26 08:38:03.664424] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:45.287 [2024-07-26 08:38:03.664474] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:45.287 [2024-07-26 08:38:03.664488] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:45.287 passed 00:04:45.287 Test: mem map registration ...[2024-07-26 08:38:03.705639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:45.287 [2024-07-26 08:38:03.705659] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:45.287 passed 00:04:45.547 Test: mem map adjacent registrations ...passed 00:04:45.547 00:04:45.547 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.547 suites 1 1 n/a 0 0 00:04:45.547 tests 4 4 4 0 0 00:04:45.547 asserts 152 152 152 0 n/a 00:04:45.547 00:04:45.547 Elapsed time = 0.142 seconds 00:04:45.547 00:04:45.547 real 0m0.150s 00:04:45.547 user 0m0.142s 00:04:45.547 sys 0m0.007s 00:04:45.547 08:38:03 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.547 08:38:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:45.547 ************************************ 00:04:45.547 END TEST env_memory 00:04:45.547 ************************************ 00:04:45.547 08:38:03 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:45.547 08:38:03 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.547 08:38:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.547 08:38:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.547 ************************************ 00:04:45.547 START TEST env_vtophys 00:04:45.547 ************************************ 00:04:45.547 08:38:03 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:45.547 EAL: lib.eal log level changed from notice to debug 00:04:45.547 EAL: 
Detected lcore 0 as core 0 on socket 0 00:04:45.547 EAL: Detected lcore 1 as core 1 on socket 0 00:04:45.547 EAL: Detected lcore 2 as core 2 on socket 0 00:04:45.547 EAL: Detected lcore 3 as core 3 on socket 0 00:04:45.547 EAL: Detected lcore 4 as core 4 on socket 0 00:04:45.547 EAL: Detected lcore 5 as core 5 on socket 0 00:04:45.547 EAL: Detected lcore 6 as core 8 on socket 0 00:04:45.547 EAL: Detected lcore 7 as core 9 on socket 0 00:04:45.547 EAL: Detected lcore 8 as core 10 on socket 0 00:04:45.548 EAL: Detected lcore 9 as core 11 on socket 0 00:04:45.548 EAL: Detected lcore 10 as core 12 on socket 0 00:04:45.548 EAL: Detected lcore 11 as core 13 on socket 0 00:04:45.548 EAL: Detected lcore 12 as core 0 on socket 1 00:04:45.548 EAL: Detected lcore 13 as core 1 on socket 1 00:04:45.548 EAL: Detected lcore 14 as core 2 on socket 1 00:04:45.548 EAL: Detected lcore 15 as core 3 on socket 1 00:04:45.548 EAL: Detected lcore 16 as core 4 on socket 1 00:04:45.548 EAL: Detected lcore 17 as core 5 on socket 1 00:04:45.548 EAL: Detected lcore 18 as core 8 on socket 1 00:04:45.548 EAL: Detected lcore 19 as core 9 on socket 1 00:04:45.548 EAL: Detected lcore 20 as core 10 on socket 1 00:04:45.548 EAL: Detected lcore 21 as core 11 on socket 1 00:04:45.548 EAL: Detected lcore 22 as core 12 on socket 1 00:04:45.548 EAL: Detected lcore 23 as core 13 on socket 1 00:04:45.548 EAL: Detected lcore 24 as core 0 on socket 0 00:04:45.548 EAL: Detected lcore 25 as core 1 on socket 0 00:04:45.548 EAL: Detected lcore 26 as core 2 on socket 0 00:04:45.548 EAL: Detected lcore 27 as core 3 on socket 0 00:04:45.548 EAL: Detected lcore 28 as core 4 on socket 0 00:04:45.548 EAL: Detected lcore 29 as core 5 on socket 0 00:04:45.548 EAL: Detected lcore 30 as core 8 on socket 0 00:04:45.548 EAL: Detected lcore 31 as core 9 on socket 0 00:04:45.548 EAL: Detected lcore 32 as core 10 on socket 0 00:04:45.548 EAL: Detected lcore 33 as core 11 on socket 0 00:04:45.548 EAL: Detected lcore 34 as core 
12 on socket 0 00:04:45.548 EAL: Detected lcore 35 as core 13 on socket 0 00:04:45.548 EAL: Detected lcore 36 as core 0 on socket 1 00:04:45.548 EAL: Detected lcore 37 as core 1 on socket 1 00:04:45.548 EAL: Detected lcore 38 as core 2 on socket 1 00:04:45.548 EAL: Detected lcore 39 as core 3 on socket 1 00:04:45.548 EAL: Detected lcore 40 as core 4 on socket 1 00:04:45.548 EAL: Detected lcore 41 as core 5 on socket 1 00:04:45.548 EAL: Detected lcore 42 as core 8 on socket 1 00:04:45.548 EAL: Detected lcore 43 as core 9 on socket 1 00:04:45.548 EAL: Detected lcore 44 as core 10 on socket 1 00:04:45.548 EAL: Detected lcore 45 as core 11 on socket 1 00:04:45.548 EAL: Detected lcore 46 as core 12 on socket 1 00:04:45.548 EAL: Detected lcore 47 as core 13 on socket 1 00:04:45.548 EAL: Maximum logical cores by configuration: 128 00:04:45.548 EAL: Detected CPU lcores: 48 00:04:45.548 EAL: Detected NUMA nodes: 2 00:04:45.548 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:45.548 EAL: Detected shared linkage of DPDK 00:04:45.548 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:45.548 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:45.548 EAL: Registered [vdev] bus. 
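The lcore listing above follows a regular pattern: 12 cores per socket, hyper-thread siblings offset by 24, and physical core IDs skipping 6 and 7. A small sketch that reproduces the mapping; the formula is inferred from the trace itself, not taken from EAL source:

```shell
#!/usr/bin/env bash
# Reproduce "Detected lcore N as core C on socket S" from the trace above.
lcore_to_core_socket() {
  local lcore=$1 idx core socket
  socket=$(( (lcore / 12) % 2 ))        # lcores 0-11 -> socket 0, 12-23 -> socket 1, repeat
  idx=$(( lcore % 12 ))
  core=$(( idx < 6 ? idx : idx + 2 ))   # physical core IDs skip 6 and 7
  echo "$core $socket"
}
lcore_to_core_socket 6    # "8 0"  matches: lcore 6 as core 8 on socket 0
lcore_to_core_socket 23   # "13 1" matches: lcore 23 as core 13 on socket 1
lcore_to_core_socket 34   # "12 0" matches: lcore 34 as core 12 on socket 0
```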
00:04:45.548 EAL: bus.vdev log level changed from disabled to notice 00:04:45.548 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:45.548 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:45.548 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:45.548 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:45.548 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:45.548 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:45.548 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:45.548 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:45.548 EAL: No shared files mode enabled, IPC will be disabled 00:04:45.548 EAL: No shared files mode enabled, IPC is disabled 00:04:45.548 EAL: Bus pci wants IOVA as 'DC' 00:04:45.548 EAL: Bus vdev wants IOVA as 'DC' 00:04:45.548 EAL: Buses did not request a specific IOVA mode. 00:04:45.548 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:45.548 EAL: Selected IOVA mode 'VA' 00:04:45.548 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.548 EAL: Probing VFIO support... 00:04:45.548 EAL: IOMMU type 1 (Type 1) is supported 00:04:45.548 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:45.548 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:45.548 EAL: VFIO support initialized 00:04:45.548 EAL: Ask a virtual area of 0x2e000 bytes 00:04:45.548 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:45.548 EAL: Setting up physically contiguous memory... 
00:04:45.548 EAL: Setting maximum number of open files to 524288 00:04:45.548 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:45.548 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:45.548 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:45.548 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.548 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:45.548 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.548 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.548 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:45.548 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:45.548 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.548 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:45.548 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.548 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.548 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:45.548 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:45.548 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.548 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:45.548 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.548 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.548 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:45.548 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:45.548 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.548 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:45.548 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.548 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.548 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:45.548 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:45.548 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:45.548 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.548 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:45.548 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.548 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.548 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:45.548 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:45.548 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.548 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:45.548 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.548 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.548 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:45.548 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:45.548 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.548 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:45.548 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.548 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.548 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:45.548 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:45.548 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.548 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:45.548 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.548 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.548 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:45.548 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:45.548 EAL: Hugepages will be freed exactly as allocated. 
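Each memseg list above reserves a small header area (0x61000 bytes) plus a 0x400000000-byte (16 GiB) VA window, with four lists per socket across two sockets. The total reserved virtual address space works out as:

```shell
#!/usr/bin/env bash
# Total VA reserved by the memseg lists in the trace above.
per_list=$(( 0x61000 + 0x400000000 ))   # header area + 16 GiB window
lists=$(( 4 * 2 ))                      # 4 segment lists per socket, 2 sockets
total=$(( per_list * lists ))
printf '0x%x bytes reserved\n' "$total"   # 0x2000308000 bytes reserved
```

This is VA reservation only; physical hugepages are mapped into these windows on demand and, as the trace notes, freed exactly as allocated.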
00:04:45.548 EAL: No shared files mode enabled, IPC is disabled 00:04:45.548 EAL: No shared files mode enabled, IPC is disabled 00:04:45.548 EAL: TSC frequency is ~2700000 KHz 00:04:45.548 EAL: Main lcore 0 is ready (tid=7f03682c4a00;cpuset=[0]) 00:04:45.548 EAL: Trying to obtain current memory policy. 00:04:45.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.548 EAL: Restoring previous memory policy: 0 00:04:45.548 EAL: request: mp_malloc_sync 00:04:45.548 EAL: No shared files mode enabled, IPC is disabled 00:04:45.548 EAL: Heap on socket 0 was expanded by 2MB 00:04:45.548 EAL: No shared files mode enabled, IPC is disabled 00:04:45.548 EAL: No shared files mode enabled, IPC is disabled 00:04:45.548 EAL: Mem event callback 'spdk:(nil)' registered 00:04:45.548 00:04:45.548 00:04:45.548 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.548 http://cunit.sourceforge.net/ 00:04:45.548 00:04:45.548 00:04:45.548 Suite: components_suite 00:04:45.548 Test: vtophys_malloc_test ...passed 00:04:45.548 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:45.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.548 EAL: Restoring previous memory policy: 4 00:04:45.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.548 EAL: request: mp_malloc_sync 00:04:45.548 EAL: No shared files mode enabled, IPC is disabled 00:04:45.548 EAL: Heap on socket 0 was expanded by 4MB 00:04:45.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.548 EAL: request: mp_malloc_sync 00:04:45.548 EAL: No shared files mode enabled, IPC is disabled 00:04:45.548 EAL: Heap on socket 0 was shrunk by 4MB 00:04:45.548 EAL: Trying to obtain current memory policy. 
00:04:45.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.548 EAL: Restoring previous memory policy: 4 00:04:45.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.548 EAL: request: mp_malloc_sync 00:04:45.548 EAL: No shared files mode enabled, IPC is disabled 00:04:45.548 EAL: Heap on socket 0 was expanded by 6MB 00:04:45.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.548 EAL: request: mp_malloc_sync 00:04:45.548 EAL: No shared files mode enabled, IPC is disabled 00:04:45.548 EAL: Heap on socket 0 was shrunk by 6MB 00:04:45.548 EAL: Trying to obtain current memory policy. 00:04:45.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.548 EAL: Restoring previous memory policy: 4 00:04:45.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.548 EAL: request: mp_malloc_sync 00:04:45.549 EAL: No shared files mode enabled, IPC is disabled 00:04:45.549 EAL: Heap on socket 0 was expanded by 10MB 00:04:45.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.549 EAL: request: mp_malloc_sync 00:04:45.549 EAL: No shared files mode enabled, IPC is disabled 00:04:45.549 EAL: Heap on socket 0 was shrunk by 10MB 00:04:45.549 EAL: Trying to obtain current memory policy. 00:04:45.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.549 EAL: Restoring previous memory policy: 4 00:04:45.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.549 EAL: request: mp_malloc_sync 00:04:45.549 EAL: No shared files mode enabled, IPC is disabled 00:04:45.549 EAL: Heap on socket 0 was expanded by 18MB 00:04:45.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.549 EAL: request: mp_malloc_sync 00:04:45.549 EAL: No shared files mode enabled, IPC is disabled 00:04:45.549 EAL: Heap on socket 0 was shrunk by 18MB 00:04:45.549 EAL: Trying to obtain current memory policy. 
00:04:45.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.549 EAL: Restoring previous memory policy: 4 00:04:45.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.549 EAL: request: mp_malloc_sync 00:04:45.549 EAL: No shared files mode enabled, IPC is disabled 00:04:45.549 EAL: Heap on socket 0 was expanded by 34MB 00:04:45.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.549 EAL: request: mp_malloc_sync 00:04:45.549 EAL: No shared files mode enabled, IPC is disabled 00:04:45.549 EAL: Heap on socket 0 was shrunk by 34MB 00:04:45.549 EAL: Trying to obtain current memory policy. 00:04:45.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.549 EAL: Restoring previous memory policy: 4 00:04:45.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.549 EAL: request: mp_malloc_sync 00:04:45.549 EAL: No shared files mode enabled, IPC is disabled 00:04:45.549 EAL: Heap on socket 0 was expanded by 66MB 00:04:45.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.549 EAL: request: mp_malloc_sync 00:04:45.549 EAL: No shared files mode enabled, IPC is disabled 00:04:45.549 EAL: Heap on socket 0 was shrunk by 66MB 00:04:45.549 EAL: Trying to obtain current memory policy. 00:04:45.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.549 EAL: Restoring previous memory policy: 4 00:04:45.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.549 EAL: request: mp_malloc_sync 00:04:45.549 EAL: No shared files mode enabled, IPC is disabled 00:04:45.549 EAL: Heap on socket 0 was expanded by 130MB 00:04:45.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.809 EAL: request: mp_malloc_sync 00:04:45.809 EAL: No shared files mode enabled, IPC is disabled 00:04:45.809 EAL: Heap on socket 0 was shrunk by 130MB 00:04:45.809 EAL: Trying to obtain current memory policy. 
00:04:45.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.809 EAL: Restoring previous memory policy: 4 00:04:45.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.809 EAL: request: mp_malloc_sync 00:04:45.809 EAL: No shared files mode enabled, IPC is disabled 00:04:45.809 EAL: Heap on socket 0 was expanded by 258MB 00:04:45.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.809 EAL: request: mp_malloc_sync 00:04:45.809 EAL: No shared files mode enabled, IPC is disabled 00:04:45.809 EAL: Heap on socket 0 was shrunk by 258MB 00:04:45.809 EAL: Trying to obtain current memory policy. 00:04:45.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.068 EAL: Restoring previous memory policy: 4 00:04:46.068 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.068 EAL: request: mp_malloc_sync 00:04:46.068 EAL: No shared files mode enabled, IPC is disabled 00:04:46.068 EAL: Heap on socket 0 was expanded by 514MB 00:04:46.068 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.328 EAL: request: mp_malloc_sync 00:04:46.328 EAL: No shared files mode enabled, IPC is disabled 00:04:46.328 EAL: Heap on socket 0 was shrunk by 514MB 00:04:46.328 EAL: Trying to obtain current memory policy. 
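The expand/shrink sizes in the vtophys malloc test trace (4, 6, 10, 18, 34, ... MB) follow 2^k + 2: each step roughly doubles the previous payload. A quick sketch reproducing the sequence:

```shell
#!/usr/bin/env bash
# The heap expand/shrink sizes seen in the EAL trace, in MB.
sizes=()
for k in $(seq 1 10); do
  sizes+=( $(( (1 << k) + 2 )) )   # 2^k + 2
done
echo "${sizes[*]}"   # 4 6 10 18 34 66 130 258 514 1026
```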
00:04:46.328 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.588 EAL: Restoring previous memory policy: 4 00:04:46.588 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.588 EAL: request: mp_malloc_sync 00:04:46.588 EAL: No shared files mode enabled, IPC is disabled 00:04:46.588 EAL: Heap on socket 0 was expanded by 1026MB 00:04:46.848 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.848 EAL: request: mp_malloc_sync 00:04:46.848 EAL: No shared files mode enabled, IPC is disabled 00:04:46.848 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:46.848 passed 00:04:46.848 00:04:46.848 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.848 suites 1 1 n/a 0 0 00:04:46.848 tests 2 2 2 0 0 00:04:46.848 asserts 497 497 497 0 n/a 00:04:46.848 00:04:46.848 Elapsed time = 1.378 seconds 00:04:46.848 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.848 EAL: request: mp_malloc_sync 00:04:46.848 EAL: No shared files mode enabled, IPC is disabled 00:04:46.848 EAL: Heap on socket 0 was shrunk by 2MB 00:04:46.848 EAL: No shared files mode enabled, IPC is disabled 00:04:46.848 EAL: No shared files mode enabled, IPC is disabled 00:04:46.848 EAL: No shared files mode enabled, IPC is disabled 00:04:46.848 00:04:46.848 real 0m1.502s 00:04:46.848 user 0m0.860s 00:04:46.848 sys 0m0.603s 00:04:46.848 08:38:05 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.848 08:38:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:46.848 ************************************ 00:04:46.848 END TEST env_vtophys 00:04:46.848 ************************************ 00:04:47.108 08:38:05 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:47.108 08:38:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.108 08:38:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.108 08:38:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.108 
************************************ 00:04:47.108 START TEST env_pci 00:04:47.108 ************************************ 00:04:47.108 08:38:05 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:47.108 00:04:47.108 00:04:47.108 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.108 http://cunit.sourceforge.net/ 00:04:47.108 00:04:47.108 00:04:47.108 Suite: pci 00:04:47.108 Test: pci_hook ...[2024-07-26 08:38:05.363414] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 831509 has claimed it 00:04:47.108 EAL: Cannot find device (10000:00:01.0) 00:04:47.108 EAL: Failed to attach device on primary process 00:04:47.108 passed 00:04:47.108 00:04:47.108 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.108 suites 1 1 n/a 0 0 00:04:47.108 tests 1 1 1 0 0 00:04:47.108 asserts 25 25 25 0 n/a 00:04:47.108 00:04:47.108 Elapsed time = 0.021 seconds 00:04:47.108 00:04:47.108 real 0m0.034s 00:04:47.108 user 0m0.006s 00:04:47.108 sys 0m0.028s 00:04:47.108 08:38:05 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.108 08:38:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:47.108 ************************************ 00:04:47.108 END TEST env_pci 00:04:47.108 ************************************ 00:04:47.108 08:38:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:47.108 08:38:05 env -- env/env.sh@15 -- # uname 00:04:47.108 08:38:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:47.108 08:38:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:47.108 08:38:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.108 08:38:05 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:47.108 08:38:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.108 08:38:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.108 ************************************ 00:04:47.108 START TEST env_dpdk_post_init 00:04:47.108 ************************************ 00:04:47.108 08:38:05 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.108 EAL: Detected CPU lcores: 48 00:04:47.108 EAL: Detected NUMA nodes: 2 00:04:47.108 EAL: Detected shared linkage of DPDK 00:04:47.108 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.108 EAL: Selected IOVA mode 'VA' 00:04:47.108 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.108 EAL: VFIO support initialized 00:04:47.108 EAL: Using IOMMU type 1 (Type 1) 00:04:52.383 Starting DPDK initialization... 00:04:52.383 Starting SPDK post initialization... 00:04:52.383 SPDK NVMe probe 00:04:52.383 Attaching to 0000:88:00.0 00:04:52.383 Attached to 0000:88:00.0 00:04:52.383 Cleaning up... 
00:04:52.383 
00:04:52.383 real 0m4.379s
00:04:52.383 user 0m3.267s
00:04:52.383 sys 0m0.169s
00:04:52.383 08:38:09 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.383 08:38:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:52.383 ************************************
00:04:52.383 END TEST env_dpdk_post_init
00:04:52.383 ************************************
00:04:52.383 08:38:09 env -- env/env.sh@26 -- # uname
00:04:52.383 08:38:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:52.383 08:38:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:52.383 08:38:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:52.383 08:38:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:52.383 08:38:09 env -- common/autotest_common.sh@10 -- # set +x
00:04:52.383 ************************************
00:04:52.383 START TEST env_mem_callbacks
00:04:52.383 ************************************
00:04:52.383 08:38:09 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:52.383 EAL: Detected CPU lcores: 48
00:04:52.383 EAL: Detected NUMA nodes: 2
00:04:52.383 EAL: Detected shared linkage of DPDK
00:04:52.383 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:52.383 EAL: Selected IOVA mode 'VA'
00:04:52.383 EAL: No free 2048 kB hugepages reported on node 1
00:04:52.383 EAL: VFIO support initialized
00:04:52.383 
00:04:52.383 
00:04:52.383 CUnit - A unit testing framework for C - Version 2.1-3
00:04:52.383 http://cunit.sourceforge.net/
00:04:52.383 
00:04:52.383 
00:04:52.383 Suite: memory
00:04:52.383 Test: test ...
00:04:52.383 register 0x200000200000 2097152
00:04:52.383 malloc 3145728
00:04:52.383 register 0x200000400000 4194304
00:04:52.383 buf 0x200000500000 len 3145728 PASSED
00:04:52.383 malloc 64
00:04:52.383 buf 0x2000004fff40 len 64 PASSED
00:04:52.383 malloc 4194304
00:04:52.383 register 0x200000800000 6291456
00:04:52.383 buf 0x200000a00000 len 4194304 PASSED
00:04:52.383 free 0x200000500000 3145728
00:04:52.383 free 0x2000004fff40 64
00:04:52.383 unregister 0x200000400000 4194304 PASSED
00:04:52.383 free 0x200000a00000 4194304
00:04:52.383 unregister 0x200000800000 6291456 PASSED
00:04:52.383 malloc 8388608
00:04:52.383 register 0x200000400000 10485760
00:04:52.383 buf 0x200000600000 len 8388608 PASSED
00:04:52.383 free 0x200000600000 8388608
00:04:52.383 unregister 0x200000400000 10485760 PASSED
00:04:52.383 passed
00:04:52.383 
00:04:52.383 Run Summary: Type Total Ran Passed Failed Inactive
00:04:52.383 suites 1 1 n/a 0 0
00:04:52.383 tests 1 1 1 0 0
00:04:52.383 asserts 15 15 15 0 n/a
00:04:52.383 
00:04:52.383 Elapsed time = 0.006 seconds
00:04:52.383 
00:04:52.383 real 0m0.049s
00:04:52.383 user 0m0.010s
00:04:52.383 sys 0m0.039s
00:04:52.383 08:38:09 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.383 08:38:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:52.383 ************************************
00:04:52.383 END TEST env_mem_callbacks
00:04:52.383 ************************************
00:04:52.383 
00:04:52.383 real 0m6.390s
00:04:52.383 user 0m4.380s
00:04:52.383 sys 0m1.045s
00:04:52.383 08:38:09 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.383 08:38:09 env -- common/autotest_common.sh@10 -- # set +x
00:04:52.383 ************************************
00:04:52.383 END TEST env
00:04:52.383 ************************************
00:04:52.383 08:38:09 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:52.383 08:38:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:52.383 08:38:09 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:52.383 08:38:09 -- common/autotest_common.sh@10 -- # set +x
00:04:52.383 ************************************
00:04:52.383 START TEST rpc
00:04:52.383 ************************************
00:04:52.383 08:38:09 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:52.383 * Looking for test storage...
00:04:52.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:52.383 08:38:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=832166
00:04:52.383 08:38:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:52.383 08:38:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:52.383 08:38:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 832166
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@831 -- # '[' -z 832166 ']'
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:52.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:52.383 [2024-07-26 08:38:10.084565] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:04:52.383 [2024-07-26 08:38:10.084648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832166 ]
00:04:52.383 EAL: No free 2048 kB hugepages reported on node 1
00:04:52.383 [2024-07-26 08:38:10.116036] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:04:52.383 [2024-07-26 08:38:10.143878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:52.383 [2024-07-26 08:38:10.232137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:52.383 [2024-07-26 08:38:10.232190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 832166' to capture a snapshot of events at runtime.
00:04:52.383 [2024-07-26 08:38:10.232220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:52.383 [2024-07-26 08:38:10.232233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:52.383 [2024-07-26 08:38:10.232244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid832166 for offline analysis/debug.
00:04:52.383 [2024-07-26 08:38:10.232274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@864 -- # return 0
00:04:52.383 08:38:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:52.383 08:38:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:52.383 08:38:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:52.383 08:38:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:52.383 08:38:10 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:52.383 ************************************
00:04:52.383 START TEST rpc_integrity
00:04:52.383 ************************************
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:52.384 {
00:04:52.384 "name": "Malloc0",
00:04:52.384 "aliases": [
00:04:52.384 "5b54d612-166f-40aa-9ff8-7ff0902f9d2c"
00:04:52.384 ],
00:04:52.384 "product_name": "Malloc disk",
00:04:52.384 "block_size": 512,
00:04:52.384 "num_blocks": 16384,
00:04:52.384 "uuid": "5b54d612-166f-40aa-9ff8-7ff0902f9d2c",
00:04:52.384 "assigned_rate_limits": {
00:04:52.384 "rw_ios_per_sec": 0,
00:04:52.384 "rw_mbytes_per_sec": 0,
00:04:52.384 "r_mbytes_per_sec": 0,
00:04:52.384 "w_mbytes_per_sec": 0
00:04:52.384 },
00:04:52.384 "claimed": false,
00:04:52.384 "zoned": false,
00:04:52.384 "supported_io_types": {
00:04:52.384 "read": true,
00:04:52.384 "write": true,
00:04:52.384 "unmap": true,
00:04:52.384 "flush": true,
00:04:52.384 "reset": true,
00:04:52.384 "nvme_admin": false,
00:04:52.384 "nvme_io": false,
00:04:52.384 "nvme_io_md": false,
00:04:52.384 "write_zeroes": true,
00:04:52.384 "zcopy": true,
00:04:52.384 "get_zone_info": false,
00:04:52.384 "zone_management": false,
00:04:52.384 "zone_append": false,
00:04:52.384 "compare": false,
00:04:52.384 "compare_and_write": false,
00:04:52.384 "abort": true,
00:04:52.384 "seek_hole": false,
00:04:52.384 "seek_data": false,
00:04:52.384 "copy": true,
00:04:52.384 "nvme_iov_md": false
00:04:52.384 },
00:04:52.384 "memory_domains": [
00:04:52.384 {
00:04:52.384 "dma_device_id": "system",
00:04:52.384 "dma_device_type": 1
00:04:52.384 },
00:04:52.384 {
00:04:52.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:52.384 "dma_device_type": 2
00:04:52.384 }
00:04:52.384 ],
00:04:52.384 "driver_specific": {}
00:04:52.384 }
00:04:52.384 ]'
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.384 [2024-07-26 08:38:10.614140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:52.384 [2024-07-26 08:38:10.614200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:52.384 [2024-07-26 08:38:10.614225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xda47f0
00:04:52.384 [2024-07-26 08:38:10.614239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:52.384 [2024-07-26 08:38:10.615719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:52.384 [2024-07-26 08:38:10.615747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:52.384 Passthru0
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:52.384 {
00:04:52.384 "name": "Malloc0",
00:04:52.384 "aliases": [
00:04:52.384 "5b54d612-166f-40aa-9ff8-7ff0902f9d2c"
00:04:52.384 ],
00:04:52.384 "product_name": "Malloc disk",
00:04:52.384 "block_size": 512,
00:04:52.384 "num_blocks": 16384,
00:04:52.384 "uuid": "5b54d612-166f-40aa-9ff8-7ff0902f9d2c",
00:04:52.384 "assigned_rate_limits": {
00:04:52.384 "rw_ios_per_sec": 0,
00:04:52.384 "rw_mbytes_per_sec": 0,
00:04:52.384 "r_mbytes_per_sec": 0,
00:04:52.384 "w_mbytes_per_sec": 0
00:04:52.384 },
00:04:52.384 "claimed": true,
00:04:52.384 "claim_type": "exclusive_write",
00:04:52.384 "zoned": false,
00:04:52.384 "supported_io_types": {
00:04:52.384 "read": true,
00:04:52.384 "write": true,
00:04:52.384 "unmap": true,
00:04:52.384 "flush": true,
00:04:52.384 "reset": true,
00:04:52.384 "nvme_admin": false,
00:04:52.384 "nvme_io": false,
00:04:52.384 "nvme_io_md": false,
00:04:52.384 "write_zeroes": true,
00:04:52.384 "zcopy": true,
00:04:52.384 "get_zone_info": false,
00:04:52.384 "zone_management": false,
00:04:52.384 "zone_append": false,
00:04:52.384 "compare": false,
00:04:52.384 "compare_and_write": false,
00:04:52.384 "abort": true,
00:04:52.384 "seek_hole": false,
00:04:52.384 "seek_data": false,
00:04:52.384 "copy": true,
00:04:52.384 "nvme_iov_md": false
00:04:52.384 },
00:04:52.384 "memory_domains": [
00:04:52.384 {
00:04:52.384 "dma_device_id": "system",
00:04:52.384 "dma_device_type": 1
00:04:52.384 },
00:04:52.384 {
00:04:52.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:52.384 "dma_device_type": 2
00:04:52.384 }
00:04:52.384 ],
00:04:52.384 "driver_specific": {}
00:04:52.384 },
00:04:52.384 {
00:04:52.384 "name": "Passthru0",
00:04:52.384 "aliases": [
00:04:52.384 "0cc5d424-fad6-5788-a04a-07602e1242cc"
00:04:52.384 ],
00:04:52.384 "product_name": "passthru",
00:04:52.384 "block_size": 512,
00:04:52.384 "num_blocks": 16384,
00:04:52.384 "uuid": "0cc5d424-fad6-5788-a04a-07602e1242cc",
00:04:52.384 "assigned_rate_limits": {
00:04:52.384 "rw_ios_per_sec": 0,
00:04:52.384 "rw_mbytes_per_sec": 0,
00:04:52.384 "r_mbytes_per_sec": 0,
00:04:52.384 "w_mbytes_per_sec": 0
00:04:52.384 },
00:04:52.384 "claimed": false,
00:04:52.384 "zoned": false,
00:04:52.384 "supported_io_types": {
00:04:52.384 "read": true,
00:04:52.384 "write": true,
00:04:52.384 "unmap": true,
00:04:52.384 "flush": true,
00:04:52.384 "reset": true,
00:04:52.384 "nvme_admin": false,
00:04:52.384 "nvme_io": false,
00:04:52.384 "nvme_io_md": false,
00:04:52.384 "write_zeroes": true,
00:04:52.384 "zcopy": true,
00:04:52.384 "get_zone_info": false,
00:04:52.384 "zone_management": false,
00:04:52.384 "zone_append": false,
00:04:52.384 "compare": false,
00:04:52.384 "compare_and_write": false,
00:04:52.384 "abort": true,
00:04:52.384 "seek_hole": false,
00:04:52.384 "seek_data": false,
00:04:52.384 "copy": true,
00:04:52.384 "nvme_iov_md": false
00:04:52.384 },
00:04:52.384 "memory_domains": [
00:04:52.384 {
00:04:52.384 "dma_device_id": "system",
00:04:52.384 "dma_device_type": 1
00:04:52.384 },
00:04:52.384 {
00:04:52.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:52.384 "dma_device_type": 2
00:04:52.384 }
00:04:52.384 ],
00:04:52.384 "driver_specific": {
00:04:52.384 "passthru": {
00:04:52.384 "name": "Passthru0",
00:04:52.384 "base_bdev_name": "Malloc0"
00:04:52.384 }
00:04:52.384 }
00:04:52.384 }
00:04:52.384 ]'
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.384 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.384 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.385 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.385 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:52.385 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:52.385 08:38:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:52.385 
00:04:52.385 real 0m0.224s
00:04:52.385 user 0m0.143s
00:04:52.385 sys 0m0.026s
00:04:52.385 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.385 08:38:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.385 ************************************
00:04:52.385 END TEST rpc_integrity
00:04:52.385 ************************************
00:04:52.385 08:38:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:52.385 08:38:10 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:52.385 08:38:10 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:52.385 08:38:10 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:52.385 ************************************
00:04:52.385 START TEST rpc_plugins
00:04:52.385 ************************************
00:04:52.385 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:04:52.385 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:52.385 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.385 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:52.385 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.385 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:52.385 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:52.385 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.385 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:52.385 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.385 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:52.385 {
00:04:52.385 "name": "Malloc1",
00:04:52.385 "aliases": [
00:04:52.385 "af55fe50-7037-4e81-8f6e-c33e46c4a358"
00:04:52.385 ],
00:04:52.385 "product_name": "Malloc disk",
00:04:52.385 "block_size": 4096,
00:04:52.385 "num_blocks": 256,
00:04:52.385 "uuid": "af55fe50-7037-4e81-8f6e-c33e46c4a358",
00:04:52.385 "assigned_rate_limits": {
00:04:52.385 "rw_ios_per_sec": 0,
00:04:52.385 "rw_mbytes_per_sec": 0,
00:04:52.385 "r_mbytes_per_sec": 0,
00:04:52.385 "w_mbytes_per_sec": 0
00:04:52.385 },
00:04:52.385 "claimed": false,
00:04:52.385 "zoned": false,
00:04:52.385 "supported_io_types": {
00:04:52.385 "read": true,
00:04:52.385 "write": true,
00:04:52.385 "unmap": true,
00:04:52.385 "flush": true,
00:04:52.385 "reset": true,
00:04:52.385 "nvme_admin": false,
00:04:52.385 "nvme_io": false,
00:04:52.385 "nvme_io_md": false,
00:04:52.385 "write_zeroes": true,
00:04:52.385 "zcopy": true,
00:04:52.385 "get_zone_info": false,
00:04:52.385 "zone_management": false,
00:04:52.385 "zone_append": false,
00:04:52.385 "compare": false,
00:04:52.385 "compare_and_write": false,
00:04:52.385 "abort": true,
00:04:52.385 "seek_hole": false,
00:04:52.385 "seek_data": false,
00:04:52.385 "copy": true,
00:04:52.385 "nvme_iov_md": false
00:04:52.385 },
00:04:52.385 "memory_domains": [
00:04:52.385 {
00:04:52.385 "dma_device_id": "system",
00:04:52.385 "dma_device_type": 1
00:04:52.385 },
00:04:52.385 {
00:04:52.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:52.385 "dma_device_type": 2
00:04:52.385 }
00:04:52.385 ],
00:04:52.385 "driver_specific": {}
00:04:52.385 }
00:04:52.385 ]'
00:04:52.385 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:52.385 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:52.385 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:52.385 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.385 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:52.646 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.646 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:52.646 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.646 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:52.646 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.646 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:52.646 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:52.646 08:38:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:52.646 
00:04:52.646 real 0m0.111s
00:04:52.646 user 0m0.074s
00:04:52.646 sys 0m0.010s
00:04:52.646 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.646 08:38:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:52.646 ************************************
00:04:52.646 END TEST rpc_plugins
00:04:52.646 ************************************
00:04:52.646 08:38:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:52.646 08:38:10 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:52.646 08:38:10 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:52.646 08:38:10 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:52.646 ************************************
00:04:52.646 START TEST rpc_trace_cmd_test
00:04:52.646 ************************************
00:04:52.646 08:38:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:04:52.646 08:38:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:52.646 08:38:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:52.646 08:38:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.646 08:38:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:52.646 08:38:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.646 08:38:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:52.646 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid832166",
00:04:52.646 "tpoint_group_mask": "0x8",
00:04:52.646 "iscsi_conn": {
00:04:52.646 "mask": "0x2",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "scsi": {
00:04:52.646 "mask": "0x4",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "bdev": {
00:04:52.646 "mask": "0x8",
00:04:52.646 "tpoint_mask": "0xffffffffffffffff"
00:04:52.646 },
00:04:52.646 "nvmf_rdma": {
00:04:52.646 "mask": "0x10",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "nvmf_tcp": {
00:04:52.646 "mask": "0x20",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "ftl": {
00:04:52.646 "mask": "0x40",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "blobfs": {
00:04:52.646 "mask": "0x80",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "dsa": {
00:04:52.646 "mask": "0x200",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "thread": {
00:04:52.646 "mask": "0x400",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "nvme_pcie": {
00:04:52.646 "mask": "0x800",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "iaa": {
00:04:52.646 "mask": "0x1000",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "nvme_tcp": {
00:04:52.646 "mask": "0x2000",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "bdev_nvme": {
00:04:52.646 "mask": "0x4000",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 },
00:04:52.646 "sock": {
00:04:52.646 "mask": "0x8000",
00:04:52.646 "tpoint_mask": "0x0"
00:04:52.646 }
00:04:52.646 }'
00:04:52.646 08:38:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:52.646 08:38:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']'
00:04:52.646 08:38:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:52.646 08:38:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:52.646 08:38:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:52.646 08:38:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:52.646 08:38:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:52.646 08:38:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:52.646 08:38:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:52.906 08:38:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:52.906 
00:04:52.906 real 0m0.183s
00:04:52.906 user 0m0.161s
00:04:52.906 sys 0m0.016s
00:04:52.906 08:38:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.906 08:38:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:52.906 ************************************
00:04:52.906 END TEST rpc_trace_cmd_test
00:04:52.906 ************************************
00:04:52.906 08:38:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:04:52.906 08:38:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:04:52.906 08:38:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:04:52.906 08:38:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:52.906 08:38:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:52.906 08:38:11 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:52.906 ************************************
00:04:52.906 START TEST rpc_daemon_integrity
00:04:52.906 ************************************
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:52.906 {
00:04:52.906 "name": "Malloc2",
00:04:52.906 "aliases": [
00:04:52.906 "77397e4f-d2c9-4033-8275-3096b2fbb9df"
00:04:52.906 ],
00:04:52.906 "product_name": "Malloc disk",
00:04:52.906 "block_size": 512,
00:04:52.906 "num_blocks": 16384,
00:04:52.906 "uuid": "77397e4f-d2c9-4033-8275-3096b2fbb9df",
00:04:52.906 "assigned_rate_limits": {
00:04:52.906 "rw_ios_per_sec": 0,
00:04:52.906 "rw_mbytes_per_sec": 0,
00:04:52.906 "r_mbytes_per_sec": 0,
00:04:52.906 "w_mbytes_per_sec": 0
00:04:52.906 },
00:04:52.906 "claimed": false,
00:04:52.906 "zoned": false,
00:04:52.906 "supported_io_types": {
00:04:52.906 "read": true,
00:04:52.906 "write": true,
00:04:52.906 "unmap": true,
00:04:52.906 "flush": true,
00:04:52.906 "reset": true,
00:04:52.906 "nvme_admin": false,
00:04:52.906 "nvme_io": false,
00:04:52.906 "nvme_io_md": false,
00:04:52.906 "write_zeroes": true,
00:04:52.906 "zcopy": true,
00:04:52.906 "get_zone_info": false,
00:04:52.906 "zone_management": false,
00:04:52.906 "zone_append": false,
00:04:52.906 "compare": false,
00:04:52.906 "compare_and_write": false,
00:04:52.906 "abort": true,
00:04:52.906 "seek_hole": false,
00:04:52.906 "seek_data": false,
00:04:52.906 "copy": true,
00:04:52.906 "nvme_iov_md": false
00:04:52.906 },
00:04:52.906 "memory_domains": [
00:04:52.906 {
00:04:52.906 "dma_device_id": "system",
00:04:52.906 "dma_device_type": 1
00:04:52.906 },
00:04:52.906 {
00:04:52.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:52.906 "dma_device_type": 2
00:04:52.906 }
00:04:52.906 ],
00:04:52.906 "driver_specific": {}
00:04:52.906 }
00:04:52.906 ]'
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.906 [2024-07-26 08:38:11.268466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:04:52.906 [2024-07-26 08:38:11.268514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:52.906 [2024-07-26 08:38:11.268540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf48490
00:04:52.906 [2024-07-26 08:38:11.268556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:52.906 [2024-07-26 08:38:11.269881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:52.906 [2024-07-26 08:38:11.269909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:52.906 Passthru0
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.906 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:52.906 {
00:04:52.906 "name": "Malloc2",
00:04:52.906 "aliases": [
00:04:52.906 "77397e4f-d2c9-4033-8275-3096b2fbb9df"
00:04:52.906 ],
00:04:52.906 "product_name": "Malloc disk",
00:04:52.906 "block_size": 512,
00:04:52.906 
"num_blocks": 16384, 00:04:52.906 "uuid": "77397e4f-d2c9-4033-8275-3096b2fbb9df", 00:04:52.906 "assigned_rate_limits": { 00:04:52.906 "rw_ios_per_sec": 0, 00:04:52.906 "rw_mbytes_per_sec": 0, 00:04:52.906 "r_mbytes_per_sec": 0, 00:04:52.906 "w_mbytes_per_sec": 0 00:04:52.906 }, 00:04:52.906 "claimed": true, 00:04:52.906 "claim_type": "exclusive_write", 00:04:52.906 "zoned": false, 00:04:52.906 "supported_io_types": { 00:04:52.906 "read": true, 00:04:52.906 "write": true, 00:04:52.906 "unmap": true, 00:04:52.906 "flush": true, 00:04:52.906 "reset": true, 00:04:52.906 "nvme_admin": false, 00:04:52.906 "nvme_io": false, 00:04:52.906 "nvme_io_md": false, 00:04:52.906 "write_zeroes": true, 00:04:52.906 "zcopy": true, 00:04:52.906 "get_zone_info": false, 00:04:52.906 "zone_management": false, 00:04:52.906 "zone_append": false, 00:04:52.906 "compare": false, 00:04:52.906 "compare_and_write": false, 00:04:52.906 "abort": true, 00:04:52.906 "seek_hole": false, 00:04:52.906 "seek_data": false, 00:04:52.906 "copy": true, 00:04:52.906 "nvme_iov_md": false 00:04:52.906 }, 00:04:52.906 "memory_domains": [ 00:04:52.906 { 00:04:52.906 "dma_device_id": "system", 00:04:52.906 "dma_device_type": 1 00:04:52.906 }, 00:04:52.906 { 00:04:52.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.906 "dma_device_type": 2 00:04:52.906 } 00:04:52.906 ], 00:04:52.906 "driver_specific": {} 00:04:52.906 }, 00:04:52.906 { 00:04:52.906 "name": "Passthru0", 00:04:52.906 "aliases": [ 00:04:52.906 "f674d493-a530-50c9-b291-a371560ef9a6" 00:04:52.906 ], 00:04:52.906 "product_name": "passthru", 00:04:52.906 "block_size": 512, 00:04:52.906 "num_blocks": 16384, 00:04:52.906 "uuid": "f674d493-a530-50c9-b291-a371560ef9a6", 00:04:52.906 "assigned_rate_limits": { 00:04:52.906 "rw_ios_per_sec": 0, 00:04:52.906 "rw_mbytes_per_sec": 0, 00:04:52.906 "r_mbytes_per_sec": 0, 00:04:52.906 "w_mbytes_per_sec": 0 00:04:52.906 }, 00:04:52.906 "claimed": false, 00:04:52.906 "zoned": false, 00:04:52.906 
"supported_io_types": { 00:04:52.906 "read": true, 00:04:52.906 "write": true, 00:04:52.906 "unmap": true, 00:04:52.906 "flush": true, 00:04:52.906 "reset": true, 00:04:52.906 "nvme_admin": false, 00:04:52.906 "nvme_io": false, 00:04:52.906 "nvme_io_md": false, 00:04:52.906 "write_zeroes": true, 00:04:52.906 "zcopy": true, 00:04:52.906 "get_zone_info": false, 00:04:52.906 "zone_management": false, 00:04:52.906 "zone_append": false, 00:04:52.906 "compare": false, 00:04:52.906 "compare_and_write": false, 00:04:52.906 "abort": true, 00:04:52.906 "seek_hole": false, 00:04:52.906 "seek_data": false, 00:04:52.906 "copy": true, 00:04:52.906 "nvme_iov_md": false 00:04:52.906 }, 00:04:52.906 "memory_domains": [ 00:04:52.906 { 00:04:52.907 "dma_device_id": "system", 00:04:52.907 "dma_device_type": 1 00:04:52.907 }, 00:04:52.907 { 00:04:52.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.907 "dma_device_type": 2 00:04:52.907 } 00:04:52.907 ], 00:04:52.907 "driver_specific": { 00:04:52.907 "passthru": { 00:04:52.907 "name": "Passthru0", 00:04:52.907 "base_bdev_name": "Malloc2" 00:04:52.907 } 00:04:52.907 } 00:04:52.907 } 00:04:52.907 ]' 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.907 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:53.167 08:38:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.167 00:04:53.167 real 0m0.224s 00:04:53.167 user 0m0.144s 00:04:53.167 sys 0m0.028s 00:04:53.167 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.167 08:38:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.167 ************************************ 00:04:53.167 END TEST rpc_daemon_integrity 00:04:53.167 ************************************ 00:04:53.167 08:38:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:53.167 08:38:11 rpc -- rpc/rpc.sh@84 -- # killprocess 832166 00:04:53.167 08:38:11 rpc -- common/autotest_common.sh@950 -- # '[' -z 832166 ']' 00:04:53.167 08:38:11 rpc -- common/autotest_common.sh@954 -- # kill -0 832166 00:04:53.167 08:38:11 rpc -- common/autotest_common.sh@955 -- # uname 00:04:53.167 08:38:11 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.167 08:38:11 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 832166 00:04:53.167 08:38:11 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.167 08:38:11 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.167 08:38:11 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 832166' 
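The rpc_daemon_integrity pass above creates Malloc2, layers Passthru0 on it with bdev_passthru_create, and uses jq against bdev_get_bdevs to confirm two bdevs exist with the base claimed. A rough Python emulation of those assertions over a trimmed copy of the JSON records printed above (only the fields the checks touch are kept):

```python
# Trimmed copy of the two bdev records from the log; only the fields the
# jq checks inspect are retained.
bdevs = [
    {"name": "Malloc2", "product_name": "Malloc disk",
     "claimed": True, "claim_type": "exclusive_write", "driver_specific": {}},
    {"name": "Passthru0", "product_name": "passthru", "claimed": False,
     "driver_specific": {"passthru": {"name": "Passthru0",
                                      "base_bdev_name": "Malloc2"}}},
]

# rpc.sh@21: after bdev_passthru_create, `jq length` reports 2 bdevs.
assert len(bdevs) == 2
base = next(b for b in bdevs if b["name"] == "Malloc2")
pt = next(b for b in bdevs if b["product_name"] == "passthru")
# The base bdev is now claimed exclusively by the passthru vbdev...
assert base["claimed"] and base["claim_type"] == "exclusive_write"
# ...and the passthru records which bdev it sits on.
assert pt["driver_specific"]["passthru"]["base_bdev_name"] == "Malloc2"
print("daemon integrity checks passed")
```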
00:04:53.167 killing process with pid 832166 00:04:53.167 08:38:11 rpc -- common/autotest_common.sh@969 -- # kill 832166 00:04:53.167 08:38:11 rpc -- common/autotest_common.sh@974 -- # wait 832166 00:04:53.426 00:04:53.426 real 0m1.877s 00:04:53.426 user 0m2.336s 00:04:53.426 sys 0m0.600s 00:04:53.426 08:38:11 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.426 08:38:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.426 ************************************ 00:04:53.426 END TEST rpc 00:04:53.426 ************************************ 00:04:53.426 08:38:11 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:53.426 08:38:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.426 08:38:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.426 08:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:53.684 ************************************ 00:04:53.684 START TEST skip_rpc 00:04:53.684 ************************************ 00:04:53.684 08:38:11 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:53.684 * Looking for test storage... 
00:04:53.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:53.684 08:38:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.684 08:38:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.684 08:38:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:53.684 08:38:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.684 08:38:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.684 08:38:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.684 ************************************ 00:04:53.684 START TEST skip_rpc 00:04:53.684 ************************************ 00:04:53.684 08:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:53.684 08:38:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=832592 00:04:53.684 08:38:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:53.684 08:38:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.684 08:38:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:53.684 [2024-07-26 08:38:12.036785] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:04:53.684 [2024-07-26 08:38:12.036866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832592 ] 00:04:53.684 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.684 [2024-07-26 08:38:12.068271] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:04:53.684 [2024-07-26 08:38:12.098094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.942 [2024-07-26 08:38:12.188842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 832592 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 832592 ']' 00:04:59.219 
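The skip_rpc test above starts spdk_tgt with --no-rpc-server and wraps `rpc_cmd spdk_get_version` in the NOT helper (autotest_common.sh@650-677), which succeeds only when the wrapped command fails. The inversion logic can be sketched in Python; the subprocess commands below are stand-ins, not real rpc.py invocations:

```python
import subprocess
import sys

def NOT(*cmd):
    """Succeed only when the wrapped command fails, like the shell helper."""
    es = subprocess.run(list(cmd)).returncode  # es mirrors `local es=$?`
    return es != 0

# Stand-ins for rpc_cmd against a target started with --no-rpc-server:
failing = [sys.executable, "-c", "raise SystemExit(1)"]   # RPC refused
passing = [sys.executable, "-c", "raise SystemExit(0)"]   # RPC accepted

assert NOT(*failing)      # the test passes when the RPC call fails
assert not NOT(*passing)  # and would fail if the RPC unexpectedly worked
print("NOT-wrapper check passed")
```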
08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 832592 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.219 08:38:16 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 832592 00:04:59.219 08:38:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.219 08:38:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.219 08:38:17 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 832592' 00:04:59.219 killing process with pid 832592 00:04:59.219 08:38:17 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 832592 00:04:59.219 08:38:17 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 832592 00:04:59.219 00:04:59.219 real 0m5.441s 00:04:59.219 user 0m5.128s 00:04:59.219 sys 0m0.316s 00:04:59.219 08:38:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.219 08:38:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.219 ************************************ 00:04:59.219 END TEST skip_rpc 00:04:59.219 ************************************ 00:04:59.219 08:38:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:59.219 08:38:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.219 08:38:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.219 08:38:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.219 ************************************ 00:04:59.219 START TEST skip_rpc_with_json 00:04:59.219 ************************************ 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=833285 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 833285 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 833285 ']' 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.219 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.219 [2024-07-26 08:38:17.525599] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:04:59.219 [2024-07-26 08:38:17.525705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833285 ] 00:04:59.219 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.219 [2024-07-26 08:38:17.557935] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:04:59.219 [2024-07-26 08:38:17.584137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.219 [2024-07-26 08:38:17.672832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.478 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.478 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:59.478 08:38:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:59.478 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.478 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.478 [2024-07-26 08:38:17.929998] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:59.478 request: 00:04:59.478 { 00:04:59.478 "trtype": "tcp", 00:04:59.478 "method": "nvmf_get_transports", 00:04:59.478 "req_id": 1 00:04:59.478 } 00:04:59.478 Got JSON-RPC error response 00:04:59.478 response: 00:04:59.478 { 00:04:59.478 "code": -19, 00:04:59.478 "message": "No such device" 00:04:59.478 } 00:04:59.478 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:59.478 08:38:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:59.478 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.478 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.478 [2024-07-26 08:38:17.938145] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.737 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.737 08:38:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:59.737 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:59.737 08:38:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.737 { 00:04:59.737 "subsystems": [ 00:04:59.737 { 00:04:59.737 "subsystem": "vfio_user_target", 00:04:59.737 "config": null 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "keyring", 00:04:59.737 "config": [] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "iobuf", 00:04:59.737 "config": [ 00:04:59.737 { 00:04:59.737 "method": "iobuf_set_options", 00:04:59.737 "params": { 00:04:59.737 "small_pool_count": 8192, 00:04:59.737 "large_pool_count": 1024, 00:04:59.737 "small_bufsize": 8192, 00:04:59.737 "large_bufsize": 135168 00:04:59.737 } 00:04:59.737 } 00:04:59.737 ] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "sock", 00:04:59.737 "config": [ 00:04:59.737 { 00:04:59.737 "method": "sock_set_default_impl", 00:04:59.737 "params": { 00:04:59.737 "impl_name": "posix" 00:04:59.737 } 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "method": "sock_impl_set_options", 00:04:59.737 "params": { 00:04:59.737 "impl_name": "ssl", 00:04:59.737 "recv_buf_size": 4096, 00:04:59.737 "send_buf_size": 4096, 00:04:59.737 "enable_recv_pipe": true, 00:04:59.737 "enable_quickack": false, 00:04:59.737 "enable_placement_id": 0, 00:04:59.737 "enable_zerocopy_send_server": true, 00:04:59.737 "enable_zerocopy_send_client": false, 00:04:59.737 "zerocopy_threshold": 0, 00:04:59.737 "tls_version": 0, 00:04:59.737 "enable_ktls": false 00:04:59.737 } 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "method": "sock_impl_set_options", 00:04:59.737 "params": { 00:04:59.737 "impl_name": "posix", 00:04:59.737 "recv_buf_size": 2097152, 00:04:59.737 "send_buf_size": 2097152, 00:04:59.737 "enable_recv_pipe": true, 
00:04:59.737 "enable_quickack": false, 00:04:59.737 "enable_placement_id": 0, 00:04:59.737 "enable_zerocopy_send_server": true, 00:04:59.737 "enable_zerocopy_send_client": false, 00:04:59.737 "zerocopy_threshold": 0, 00:04:59.737 "tls_version": 0, 00:04:59.737 "enable_ktls": false 00:04:59.737 } 00:04:59.737 } 00:04:59.737 ] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "vmd", 00:04:59.737 "config": [] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "accel", 00:04:59.737 "config": [ 00:04:59.737 { 00:04:59.737 "method": "accel_set_options", 00:04:59.737 "params": { 00:04:59.737 "small_cache_size": 128, 00:04:59.737 "large_cache_size": 16, 00:04:59.737 "task_count": 2048, 00:04:59.737 "sequence_count": 2048, 00:04:59.737 "buf_count": 2048 00:04:59.737 } 00:04:59.737 } 00:04:59.737 ] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "bdev", 00:04:59.737 "config": [ 00:04:59.737 { 00:04:59.737 "method": "bdev_set_options", 00:04:59.737 "params": { 00:04:59.737 "bdev_io_pool_size": 65535, 00:04:59.737 "bdev_io_cache_size": 256, 00:04:59.737 "bdev_auto_examine": true, 00:04:59.737 "iobuf_small_cache_size": 128, 00:04:59.737 "iobuf_large_cache_size": 16 00:04:59.737 } 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "method": "bdev_raid_set_options", 00:04:59.737 "params": { 00:04:59.737 "process_window_size_kb": 1024, 00:04:59.737 "process_max_bandwidth_mb_sec": 0 00:04:59.737 } 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "method": "bdev_iscsi_set_options", 00:04:59.737 "params": { 00:04:59.737 "timeout_sec": 30 00:04:59.737 } 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "method": "bdev_nvme_set_options", 00:04:59.737 "params": { 00:04:59.737 "action_on_timeout": "none", 00:04:59.737 "timeout_us": 0, 00:04:59.737 "timeout_admin_us": 0, 00:04:59.737 "keep_alive_timeout_ms": 10000, 00:04:59.737 "arbitration_burst": 0, 00:04:59.737 "low_priority_weight": 0, 00:04:59.737 "medium_priority_weight": 0, 00:04:59.737 "high_priority_weight": 0, 00:04:59.737 
"nvme_adminq_poll_period_us": 10000, 00:04:59.737 "nvme_ioq_poll_period_us": 0, 00:04:59.737 "io_queue_requests": 0, 00:04:59.737 "delay_cmd_submit": true, 00:04:59.737 "transport_retry_count": 4, 00:04:59.737 "bdev_retry_count": 3, 00:04:59.737 "transport_ack_timeout": 0, 00:04:59.737 "ctrlr_loss_timeout_sec": 0, 00:04:59.737 "reconnect_delay_sec": 0, 00:04:59.737 "fast_io_fail_timeout_sec": 0, 00:04:59.737 "disable_auto_failback": false, 00:04:59.737 "generate_uuids": false, 00:04:59.737 "transport_tos": 0, 00:04:59.737 "nvme_error_stat": false, 00:04:59.737 "rdma_srq_size": 0, 00:04:59.737 "io_path_stat": false, 00:04:59.737 "allow_accel_sequence": false, 00:04:59.737 "rdma_max_cq_size": 0, 00:04:59.737 "rdma_cm_event_timeout_ms": 0, 00:04:59.737 "dhchap_digests": [ 00:04:59.737 "sha256", 00:04:59.737 "sha384", 00:04:59.737 "sha512" 00:04:59.737 ], 00:04:59.737 "dhchap_dhgroups": [ 00:04:59.737 "null", 00:04:59.737 "ffdhe2048", 00:04:59.737 "ffdhe3072", 00:04:59.737 "ffdhe4096", 00:04:59.737 "ffdhe6144", 00:04:59.737 "ffdhe8192" 00:04:59.737 ] 00:04:59.737 } 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "method": "bdev_nvme_set_hotplug", 00:04:59.737 "params": { 00:04:59.737 "period_us": 100000, 00:04:59.737 "enable": false 00:04:59.737 } 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "method": "bdev_wait_for_examine" 00:04:59.737 } 00:04:59.737 ] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "scsi", 00:04:59.737 "config": null 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "scheduler", 00:04:59.737 "config": [ 00:04:59.737 { 00:04:59.737 "method": "framework_set_scheduler", 00:04:59.737 "params": { 00:04:59.737 "name": "static" 00:04:59.737 } 00:04:59.737 } 00:04:59.737 ] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "vhost_scsi", 00:04:59.737 "config": [] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "vhost_blk", 00:04:59.737 "config": [] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "ublk", 00:04:59.737 
"config": [] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "nbd", 00:04:59.737 "config": [] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "nvmf", 00:04:59.737 "config": [ 00:04:59.737 { 00:04:59.737 "method": "nvmf_set_config", 00:04:59.737 "params": { 00:04:59.737 "discovery_filter": "match_any", 00:04:59.737 "admin_cmd_passthru": { 00:04:59.737 "identify_ctrlr": false 00:04:59.737 } 00:04:59.737 } 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "method": "nvmf_set_max_subsystems", 00:04:59.737 "params": { 00:04:59.737 "max_subsystems": 1024 00:04:59.737 } 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "method": "nvmf_set_crdt", 00:04:59.737 "params": { 00:04:59.737 "crdt1": 0, 00:04:59.737 "crdt2": 0, 00:04:59.737 "crdt3": 0 00:04:59.737 } 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "method": "nvmf_create_transport", 00:04:59.737 "params": { 00:04:59.737 "trtype": "TCP", 00:04:59.737 "max_queue_depth": 128, 00:04:59.737 "max_io_qpairs_per_ctrlr": 127, 00:04:59.737 "in_capsule_data_size": 4096, 00:04:59.737 "max_io_size": 131072, 00:04:59.737 "io_unit_size": 131072, 00:04:59.737 "max_aq_depth": 128, 00:04:59.737 "num_shared_buffers": 511, 00:04:59.737 "buf_cache_size": 4294967295, 00:04:59.737 "dif_insert_or_strip": false, 00:04:59.737 "zcopy": false, 00:04:59.737 "c2h_success": true, 00:04:59.737 "sock_priority": 0, 00:04:59.737 "abort_timeout_sec": 1, 00:04:59.737 "ack_timeout": 0, 00:04:59.737 "data_wr_pool_size": 0 00:04:59.737 } 00:04:59.737 } 00:04:59.737 ] 00:04:59.737 }, 00:04:59.737 { 00:04:59.737 "subsystem": "iscsi", 00:04:59.737 "config": [ 00:04:59.737 { 00:04:59.737 "method": "iscsi_set_options", 00:04:59.737 "params": { 00:04:59.737 "node_base": "iqn.2016-06.io.spdk", 00:04:59.737 "max_sessions": 128, 00:04:59.737 "max_connections_per_session": 2, 00:04:59.737 "max_queue_depth": 64, 00:04:59.737 "default_time2wait": 2, 00:04:59.737 "default_time2retain": 20, 00:04:59.737 "first_burst_length": 8192, 00:04:59.737 "immediate_data": true, 
00:04:59.737 "allow_duplicated_isid": false, 00:04:59.737 "error_recovery_level": 0, 00:04:59.737 "nop_timeout": 60, 00:04:59.737 "nop_in_interval": 30, 00:04:59.737 "disable_chap": false, 00:04:59.737 "require_chap": false, 00:04:59.737 "mutual_chap": false, 00:04:59.737 "chap_group": 0, 00:04:59.737 "max_large_datain_per_connection": 64, 00:04:59.737 "max_r2t_per_connection": 4, 00:04:59.737 "pdu_pool_size": 36864, 00:04:59.737 "immediate_data_pool_size": 16384, 00:04:59.737 "data_out_pool_size": 2048 00:04:59.737 } 00:04:59.737 } 00:04:59.737 ] 00:04:59.737 } 00:04:59.737 ] 00:04:59.737 } 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 833285 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 833285 ']' 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 833285 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 833285 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 833285' 00:04:59.737 killing process with pid 833285 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 833285 00:04:59.737 08:38:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 833285 00:05:00.307 08:38:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 
-- # local spdk_pid=833427 00:05:00.307 08:38:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.307 08:38:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 833427 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 833427 ']' 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 833427 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 833427 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 833427' 00:05:05.626 killing process with pid 833427 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 833427 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 833427 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:05.626 00:05:05.626 real 0m6.501s 00:05:05.626 user 0m6.094s 00:05:05.626 sys 0m0.685s 
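The skip_rpc_with_json pass above saves the running configuration with save_config, boots a second target from that file via --json, and greps its log for 'TCP Transport Init' to prove the nvmf transport was re-created on replay. A small Python sketch of the same round-trip property on a minimal config shaped like the dump above (most parameters omitted):

```python
import json

# Minimal config in the shape of the save_config dump above; only the nvmf
# subsystem is kept and most parameters are omitted.
config = {
    "subsystems": [
        {"subsystem": "nvmf", "config": [
            {"method": "nvmf_create_transport",
             "params": {"trtype": "TCP", "max_queue_depth": 128}},
        ]},
    ]
}

blob = json.dumps(config)     # what save_config writes to config.json
reloaded = json.loads(blob)   # what `spdk_tgt --json config.json` replays

nvmf = next(s for s in reloaded["subsystems"] if s["subsystem"] == "nvmf")
methods = [c["method"] for c in nvmf["config"]]
# If this method survives the round trip, the restarted target re-creates
# the TCP transport, which is what the 'TCP Transport Init' grep confirms.
assert "nvmf_create_transport" in methods
print("config round-trip check passed")
```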
00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.626 08:38:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.626 ************************************ 00:05:05.626 END TEST skip_rpc_with_json 00:05:05.626 ************************************ 00:05:05.626 08:38:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:05.626 08:38:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.626 08:38:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.626 08:38:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.626 ************************************ 00:05:05.626 START TEST skip_rpc_with_delay 00:05:05.626 ************************************ 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:05.626 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.626 [2024-07-26 08:38:24.074754] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:05.626 [2024-07-26 08:38:24.074872] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:05.886 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:05.886 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.886 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:05.886 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.886 00:05:05.886 real 0m0.068s 00:05:05.886 user 0m0.043s 00:05:05.886 sys 0m0.025s 00:05:05.886 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.886 08:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:05.886 ************************************ 00:05:05.886 END TEST skip_rpc_with_delay 00:05:05.886 ************************************ 00:05:05.886 08:38:24 skip_rpc -- 
rpc/skip_rpc.sh@77 -- # uname 00:05:05.886 08:38:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:05.886 08:38:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:05.886 08:38:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.886 08:38:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.886 08:38:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.886 ************************************ 00:05:05.886 START TEST exit_on_failed_rpc_init 00:05:05.886 ************************************ 00:05:05.886 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:05.886 08:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=834141 00:05:05.886 08:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.886 08:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 834141 00:05:05.886 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 834141 ']' 00:05:05.887 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.887 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.887 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:05.887 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.887 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.887 [2024-07-26 08:38:24.185985] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:05.887 [2024-07-26 08:38:24.186099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834141 ] 00:05:05.887 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.887 [2024-07-26 08:38:24.218678] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:05.887 [2024-07-26 08:38:24.244725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.887 [2024-07-26 08:38:24.333687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:06.145 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.404 [2024-07-26 08:38:24.638435] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:06.405 [2024-07-26 08:38:24.638534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834156 ] 00:05:06.405 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.405 [2024-07-26 08:38:24.670661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:06.405 [2024-07-26 08:38:24.700636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.405 [2024-07-26 08:38:24.792582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.405 [2024-07-26 08:38:24.792714] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:06.405 [2024-07-26 08:38:24.792736] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.405 [2024-07-26 08:38:24.792749] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.664 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 834141 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 834141 ']' 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 834141 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 834141 
00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 834141' 00:05:06.665 killing process with pid 834141 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 834141 00:05:06.665 08:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 834141 00:05:06.924 00:05:06.924 real 0m1.178s 00:05:06.924 user 0m1.280s 00:05:06.924 sys 0m0.456s 00:05:06.924 08:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.924 08:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.924 ************************************ 00:05:06.924 END TEST exit_on_failed_rpc_init 00:05:06.924 ************************************ 00:05:06.924 08:38:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.924 00:05:06.924 real 0m13.430s 00:05:06.924 user 0m12.642s 00:05:06.924 sys 0m1.640s 00:05:06.924 08:38:25 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.924 08:38:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.924 ************************************ 00:05:06.924 END TEST skip_rpc 00:05:06.924 ************************************ 00:05:06.924 08:38:25 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.924 08:38:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.924 08:38:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.924 08:38:25 -- common/autotest_common.sh@10 -- # set +x 00:05:07.183 
************************************ 00:05:07.183 START TEST rpc_client 00:05:07.183 ************************************ 00:05:07.183 08:38:25 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:07.183 * Looking for test storage... 00:05:07.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:07.183 08:38:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:07.183 OK 00:05:07.183 08:38:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:07.183 00:05:07.183 real 0m0.067s 00:05:07.183 user 0m0.028s 00:05:07.183 sys 0m0.044s 00:05:07.183 08:38:25 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.183 08:38:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:07.183 ************************************ 00:05:07.183 END TEST rpc_client 00:05:07.183 ************************************ 00:05:07.183 08:38:25 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:07.183 08:38:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.183 08:38:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.183 08:38:25 -- common/autotest_common.sh@10 -- # set +x 00:05:07.183 ************************************ 00:05:07.183 START TEST json_config 00:05:07.183 ************************************ 00:05:07.183 08:38:25 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:07.183 08:38:25 json_config -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.183 08:38:25 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.183 08:38:25 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.183 08:38:25 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.183 08:38:25 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.183 08:38:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.183 08:38:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.183 08:38:25 json_config -- paths/export.sh@5 -- # export PATH 00:05:07.183 08:38:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@47 -- # : 0 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:07.183 
08:38:25 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:07.183 08:38:25 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:07.183 INFO: JSON configuration test init 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:07.183 08:38:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:07.183 08:38:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.183 08:38:25 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:07.184 08:38:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:07.184 08:38:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.184 08:38:25 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:07.184 08:38:25 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.184 08:38:25 json_config -- json_config/common.sh@10 -- # shift 00:05:07.184 08:38:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.184 08:38:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.184 08:38:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.184 08:38:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.184 08:38:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:05:07.184 08:38:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=834394 00:05:07.184 08:38:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:07.184 08:38:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.184 Waiting for target to run... 00:05:07.184 08:38:25 json_config -- json_config/common.sh@25 -- # waitforlisten 834394 /var/tmp/spdk_tgt.sock 00:05:07.184 08:38:25 json_config -- common/autotest_common.sh@831 -- # '[' -z 834394 ']' 00:05:07.184 08:38:25 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.184 08:38:25 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.184 08:38:25 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.184 08:38:25 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.184 08:38:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.184 [2024-07-26 08:38:25.611589] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:07.184 [2024-07-26 08:38:25.611670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834394 ] 00:05:07.184 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.751 [2024-07-26 08:38:26.088853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:07.751 [2024-07-26 08:38:26.123120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.751 [2024-07-26 08:38:26.203668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.316 08:38:26 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.316 08:38:26 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:08.316 08:38:26 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.316 00:05:08.316 08:38:26 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:08.316 08:38:26 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:08.316 08:38:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.316 08:38:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.316 08:38:26 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:08.316 08:38:26 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:08.316 08:38:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.316 08:38:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.316 08:38:26 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:08.316 08:38:26 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:08.316 08:38:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:11.608 08:38:29 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:11.608 08:38:29 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:11.608 08:38:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.608 08:38:29 json_config -- common/autotest_common.sh@10 -- # set +x 
00:05:11.608 08:38:29 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:11.608 08:38:29 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:11.608 08:38:29 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:11.608 08:38:29 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:11.608 08:38:29 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:11.608 08:38:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:11.608 08:38:29 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:11.608 08:38:29 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:11.608 08:38:29 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@51 -- # sort 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:11.608 08:38:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.608 08:38:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:11.608 08:38:30 json_config -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:11.608 08:38:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.608 08:38:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:11.608 08:38:30 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:11.608 08:38:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:11.866 MallocForNvmf0 00:05:11.866 08:38:30 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:11.866 08:38:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:12.125 MallocForNvmf1 00:05:12.125 08:38:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:12.125 08:38:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:12.384 [2024-07-26 08:38:30.745348] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.384 08:38:30 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:12.384 08:38:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:12.643 08:38:30 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:12.643 08:38:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:12.900 08:38:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:12.900 08:38:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:13.158 08:38:31 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:13.158 08:38:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:13.416 [2024-07-26 08:38:31.724589] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:13.416 08:38:31 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:13.416 08:38:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:13.416 08:38:31 json_config -- common/autotest_common.sh@10 -- # set +x 
00:05:13.416 08:38:31 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target
00:05:13.416 08:38:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:13.416 08:38:31 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:13.416 08:38:31 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]]
00:05:13.416 08:38:31 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:13.416 08:38:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:13.673 MallocBdevForConfigChangeCheck
00:05:13.673 08:38:32 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init
00:05:13.673 08:38:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:13.674 08:38:32 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:13.674 08:38:32 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config
00:05:13.674 08:38:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:14.241 08:38:32 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...'
INFO: shutting down applications...
00:05:14.241 08:38:32 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]]
00:05:14.241 08:38:32 json_config -- json_config/json_config.sh@372 -- # json_config_clear target
00:05:14.241 08:38:32 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]]
00:05:14.241 08:38:32 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:15.618 Calling clear_iscsi_subsystem
00:05:15.618 Calling clear_nvmf_subsystem
00:05:15.618 Calling clear_nbd_subsystem
00:05:15.618 Calling clear_ublk_subsystem
00:05:15.618 Calling clear_vhost_blk_subsystem
00:05:15.618 Calling clear_vhost_scsi_subsystem
00:05:15.618 Calling clear_bdev_subsystem
00:05:15.618 08:38:34 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:15.618 08:38:34 json_config -- json_config/json_config.sh@347 -- # count=100
00:05:15.618 08:38:34 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']'
00:05:15.618 08:38:34 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:15.618 08:38:34 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:15.618 08:38:34 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:16.187 08:38:34 json_config -- json_config/json_config.sh@349 -- # break
00:05:16.187 08:38:34 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']'
00:05:16.187 08:38:34 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target
00:05:16.187 08:38:34 json_config -- json_config/common.sh@31 -- # local app=target
00:05:16.187 08:38:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:16.187 08:38:34 json_config -- json_config/common.sh@35 -- # [[ -n 834394 ]]
00:05:16.187 08:38:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 834394
00:05:16.187 08:38:34 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:16.187 08:38:34 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:16.187 08:38:34 json_config -- json_config/common.sh@41 -- # kill -0 834394
00:05:16.187 08:38:34 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:16.756 08:38:34 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:16.756 08:38:34 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:16.756 08:38:34 json_config -- json_config/common.sh@41 -- # kill -0 834394
00:05:16.756 08:38:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:16.756 08:38:34 json_config -- json_config/common.sh@43 -- # break
00:05:16.756 08:38:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:16.756 08:38:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
SPDK target shutdown done
00:05:16.756 08:38:34 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...'
INFO: relaunching applications...
00:05:16.756 08:38:34 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:16.756 08:38:34 json_config -- json_config/common.sh@9 -- # local app=target
00:05:16.756 08:38:34 json_config -- json_config/common.sh@10 -- # shift
00:05:16.756 08:38:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:16.756 08:38:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:16.756 08:38:34 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:16.756 08:38:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:16.756 08:38:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:16.756 08:38:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=835593
00:05:16.756 08:38:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:16.756 08:38:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:16.756 08:38:34 json_config -- json_config/common.sh@25 -- # waitforlisten 835593 /var/tmp/spdk_tgt.sock
00:05:16.756 08:38:34 json_config -- common/autotest_common.sh@831 -- # '[' -z 835593 ']'
00:05:16.756 08:38:34 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:16.756 08:38:34 json_config -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:16.756 08:38:34 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:16.756 08:38:34 json_config -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:16.756 08:38:34 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:16.756 [2024-07-26 08:38:35.011930] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:05:16.756 [2024-07-26 08:38:35.012020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835593 ]
00:05:16.756 EAL: No free 2048 kB hugepages reported on node 1
00:05:17.323 [2024-07-26 08:38:35.500441] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:17.323 [2024-07-26 08:38:35.534303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:17.323 [2024-07-26 08:38:35.616282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.650 [2024-07-26 08:38:38.650845] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:20.650 [2024-07-26 08:38:38.683383] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:21.219 08:38:39 json_config -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:21.219 08:38:39 json_config -- common/autotest_common.sh@864 -- # return 0
00:05:21.219 08:38:39 json_config -- json_config/common.sh@26 -- # echo ''
00:05:21.219
00:05:21.219 08:38:39 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]]
00:05:21.219 08:38:39 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...'
INFO: Checking if target configuration is the same...
00:05:21.219 08:38:39 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:21.219 08:38:39 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config
00:05:21.219 08:38:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:21.219 + '[' 2 -ne 2 ']'
00:05:21.219 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:21.219 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:21.219 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:21.219 +++ basename /dev/fd/62
00:05:21.219 ++ mktemp /tmp/62.XXX
00:05:21.219 + tmp_file_1=/tmp/62.yqg
00:05:21.219 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:21.219 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:21.219 + tmp_file_2=/tmp/spdk_tgt_config.json.gsZ
00:05:21.219 + ret=0
00:05:21.219 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:21.478 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:21.478 + diff -u /tmp/62.yqg /tmp/spdk_tgt_config.json.gsZ
00:05:21.478 + echo 'INFO: JSON config files are the same'
INFO: JSON config files are the same
00:05:21.478 + rm /tmp/62.yqg /tmp/spdk_tgt_config.json.gsZ
00:05:21.478 + exit 0
00:05:21.478 08:38:39 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]]
00:05:21.478 08:38:39 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...'
INFO: changing configuration and checking if this can be detected...
00:05:21.478 08:38:39 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:21.478 08:38:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:21.736 08:38:40 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:21.736 08:38:40 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config
00:05:21.736 08:38:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:21.736 + '[' 2 -ne 2 ']'
00:05:21.736 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:21.736 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:21.736 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:21.736 +++ basename /dev/fd/62
00:05:21.736 ++ mktemp /tmp/62.XXX
00:05:21.736 + tmp_file_1=/tmp/62.FKr
00:05:21.736 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:21.736 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:21.736 + tmp_file_2=/tmp/spdk_tgt_config.json.xkK
00:05:21.736 + ret=0
00:05:21.736 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:22.306 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:22.306 + diff -u /tmp/62.FKr /tmp/spdk_tgt_config.json.xkK
00:05:22.306 + ret=1
00:05:22.306 + echo '=== Start of file: /tmp/62.FKr ==='
00:05:22.306 + cat /tmp/62.FKr
00:05:22.306 + echo '=== End of file: /tmp/62.FKr ==='
00:05:22.306 + echo ''
00:05:22.306 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xkK ==='
00:05:22.306 + cat /tmp/spdk_tgt_config.json.xkK
00:05:22.306 + echo '=== End of file: /tmp/spdk_tgt_config.json.xkK ==='
00:05:22.306 + echo ''
00:05:22.306 + rm /tmp/62.FKr /tmp/spdk_tgt_config.json.xkK
00:05:22.306 + exit 1
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.'
INFO: configuration change detected.
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@311 -- # local ret=0
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]]
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@321 -- # [[ -n 835593 ]]
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]]
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@197 -- # uname -s
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]]
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]]
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:22.306 08:38:40 json_config -- json_config/json_config.sh@327 -- # killprocess 835593
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@950 -- # '[' -z 835593 ']'
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@954 -- # kill -0 835593
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@955 -- # uname
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 835593
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 835593'
killing process with pid 835593
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@969 -- # kill 835593
00:05:22.306 08:38:40 json_config -- common/autotest_common.sh@974 -- # wait 835593
00:05:24.216 08:38:42 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.216 08:38:42 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini
00:05:24.216 08:38:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:24.216 08:38:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:24.216 08:38:42 json_config -- json_config/json_config.sh@332 -- # return 0
00:05:24.216 08:38:42 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success'
INFO: Success
00:05:24.216
00:05:24.216 real 0m16.772s
00:05:24.216 user 0m18.545s
00:05:24.216 sys 0m2.268s
00:05:24.216 08:38:42 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:24.216 08:38:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:24.216 ************************************
00:05:24.216 END TEST json_config
00:05:24.216 ************************************
00:05:24.216 08:38:42 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:24.217 08:38:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:24.217 08:38:42 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:24.217 08:38:42 -- common/autotest_common.sh@10 -- # set +x
00:05:24.217 ************************************
00:05:24.217 START TEST json_config_extra_key
00:05:24.217 ************************************
00:05:24.217 08:38:42 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:24.217 08:38:42 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:24.217 08:38:42 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:24.217 08:38:42 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:24.217 08:38:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:24.217 08:38:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:24.217 08:38:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:24.217 08:38:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:24.217 08:38:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:24.217 08:38:42 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
INFO: launching applications...
00:05:24.217 08:38:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=836641
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:24.217 08:38:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 836641 /var/tmp/spdk_tgt.sock
00:05:24.217 08:38:42 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 836641 ']'
00:05:24.217 08:38:42 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:24.217 08:38:42 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:24.217 08:38:42 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:24.217 08:38:42 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:24.217 08:38:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:24.217 [2024-07-26 08:38:42.414357] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:05:24.217 [2024-07-26 08:38:42.414476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836641 ]
00:05:24.217 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.476 [2024-07-26 08:38:42.721775] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:24.476 [2024-07-26 08:38:42.755364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.476 [2024-07-26 08:38:42.818817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.053 08:38:43 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:25.053 08:38:43 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0
00:05:25.053 08:38:43 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:25.053
00:05:25.053 08:38:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
INFO: shutting down applications...
00:05:25.053 08:38:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:25.053 08:38:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:25.053 08:38:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:25.053 08:38:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 836641 ]]
00:05:25.053 08:38:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 836641
00:05:25.053 08:38:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:25.053 08:38:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:25.053 08:38:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 836641
00:05:25.053 08:38:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:25.626 08:38:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:25.626 08:38:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:25.626 08:38:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 836641
00:05:25.626 08:38:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:25.626 08:38:43 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:25.626 08:38:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:25.626 08:38:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
SPDK target shutdown done
00:05:25.626 08:38:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
Success
00:05:25.626
00:05:25.626 real 0m1.532s
00:05:25.626 user 0m1.510s
00:05:25.626 sys 0m0.413s
00:05:25.626 08:38:43 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:25.626 08:38:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:25.626 ************************************
00:05:25.626 END TEST json_config_extra_key
00:05:25.626 ************************************
00:05:25.626 08:38:43 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:25.626 08:38:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:25.626 08:38:43 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:25.626 08:38:43 -- common/autotest_common.sh@10 -- # set +x
00:05:25.626 ************************************
00:05:25.626 START TEST alias_rpc
00:05:25.626 ************************************
00:05:25.626 08:38:43 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:25.626 * Looking for test storage...
00:05:25.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:05:25.626 08:38:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:25.626 08:38:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=836822
00:05:25.626 08:38:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:25.626 08:38:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 836822
00:05:25.626 08:38:43 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 836822 ']'
00:05:25.626 08:38:43 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:25.626 08:38:43 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:25.626 08:38:43 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:25.626 08:38:43 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:25.626 08:38:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:25.626 [2024-07-26 08:38:44.005502] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:05:25.626 [2024-07-26 08:38:44.005601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836822 ]
00:05:25.626 EAL: No free 2048 kB hugepages reported on node 1
00:05:25.626 [2024-07-26 08:38:44.038528] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:25.626 [2024-07-26 08:38:44.065273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.885 [2024-07-26 08:38:44.150112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.145 08:38:44 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.145 08:38:44 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:26.145 08:38:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:26.405 08:38:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 836822 00:05:26.405 08:38:44 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 836822 ']' 00:05:26.405 08:38:44 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 836822 00:05:26.405 08:38:44 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:26.405 08:38:44 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.405 08:38:44 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 836822 00:05:26.406 08:38:44 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.406 08:38:44 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.406 08:38:44 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 836822' 00:05:26.406 killing process with pid 836822 00:05:26.406 08:38:44 alias_rpc -- common/autotest_common.sh@969 -- # kill 836822 00:05:26.406 08:38:44 alias_rpc -- common/autotest_common.sh@974 -- # wait 836822 00:05:26.665 00:05:26.665 real 0m1.208s 00:05:26.665 user 0m1.295s 00:05:26.665 sys 0m0.416s 00:05:26.665 08:38:45 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.665 08:38:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.665 ************************************ 00:05:26.665 END TEST alias_rpc 00:05:26.665 ************************************ 00:05:26.923 08:38:45 -- spdk/autotest.sh@176 -- # [[ 0 -eq 
0 ]] 00:05:26.923 08:38:45 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.923 08:38:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.923 08:38:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.923 08:38:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.923 ************************************ 00:05:26.923 START TEST spdkcli_tcp 00:05:26.923 ************************************ 00:05:26.923 08:38:45 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.923 * Looking for test storage... 00:05:26.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:26.923 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:26.923 08:38:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:26.923 08:38:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:26.923 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.923 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.923 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.923 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.923 08:38:45 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:26.923 08:38:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.923 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=837126 00:05:26.923 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.923 
08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 837126 00:05:26.923 08:38:45 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 837126 ']' 00:05:26.923 08:38:45 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.923 08:38:45 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.923 08:38:45 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.923 08:38:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.923 08:38:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.923 [2024-07-26 08:38:45.263075] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:26.923 [2024-07-26 08:38:45.263156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837126 ] 00:05:26.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.923 [2024-07-26 08:38:45.295110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:26.923 [2024-07-26 08:38:45.322150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.182 [2024-07-26 08:38:45.409681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.182 [2024-07-26 08:38:45.409684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.442 08:38:45 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.442 08:38:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:27.442 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=837144 00:05:27.442 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:27.442 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:27.442 [ 00:05:27.442 "bdev_malloc_delete", 00:05:27.442 "bdev_malloc_create", 00:05:27.442 "bdev_null_resize", 00:05:27.442 "bdev_null_delete", 00:05:27.442 "bdev_null_create", 00:05:27.442 "bdev_nvme_cuse_unregister", 00:05:27.442 "bdev_nvme_cuse_register", 00:05:27.442 "bdev_opal_new_user", 00:05:27.442 "bdev_opal_set_lock_state", 00:05:27.442 "bdev_opal_delete", 00:05:27.442 "bdev_opal_get_info", 00:05:27.442 "bdev_opal_create", 00:05:27.442 "bdev_nvme_opal_revert", 00:05:27.442 "bdev_nvme_opal_init", 00:05:27.442 "bdev_nvme_send_cmd", 00:05:27.442 "bdev_nvme_get_path_iostat", 00:05:27.442 "bdev_nvme_get_mdns_discovery_info", 00:05:27.442 "bdev_nvme_stop_mdns_discovery", 00:05:27.442 "bdev_nvme_start_mdns_discovery", 00:05:27.442 "bdev_nvme_set_multipath_policy", 00:05:27.442 "bdev_nvme_set_preferred_path", 00:05:27.442 "bdev_nvme_get_io_paths", 00:05:27.442 "bdev_nvme_remove_error_injection", 00:05:27.442 "bdev_nvme_add_error_injection", 00:05:27.442 "bdev_nvme_get_discovery_info", 00:05:27.442 "bdev_nvme_stop_discovery", 00:05:27.442 "bdev_nvme_start_discovery", 00:05:27.442 "bdev_nvme_get_controller_health_info", 
00:05:27.442 "bdev_nvme_disable_controller", 00:05:27.442 "bdev_nvme_enable_controller", 00:05:27.442 "bdev_nvme_reset_controller", 00:05:27.442 "bdev_nvme_get_transport_statistics", 00:05:27.442 "bdev_nvme_apply_firmware", 00:05:27.442 "bdev_nvme_detach_controller", 00:05:27.442 "bdev_nvme_get_controllers", 00:05:27.442 "bdev_nvme_attach_controller", 00:05:27.442 "bdev_nvme_set_hotplug", 00:05:27.442 "bdev_nvme_set_options", 00:05:27.442 "bdev_passthru_delete", 00:05:27.442 "bdev_passthru_create", 00:05:27.442 "bdev_lvol_set_parent_bdev", 00:05:27.442 "bdev_lvol_set_parent", 00:05:27.442 "bdev_lvol_check_shallow_copy", 00:05:27.442 "bdev_lvol_start_shallow_copy", 00:05:27.442 "bdev_lvol_grow_lvstore", 00:05:27.442 "bdev_lvol_get_lvols", 00:05:27.442 "bdev_lvol_get_lvstores", 00:05:27.442 "bdev_lvol_delete", 00:05:27.442 "bdev_lvol_set_read_only", 00:05:27.442 "bdev_lvol_resize", 00:05:27.442 "bdev_lvol_decouple_parent", 00:05:27.442 "bdev_lvol_inflate", 00:05:27.442 "bdev_lvol_rename", 00:05:27.442 "bdev_lvol_clone_bdev", 00:05:27.442 "bdev_lvol_clone", 00:05:27.442 "bdev_lvol_snapshot", 00:05:27.442 "bdev_lvol_create", 00:05:27.442 "bdev_lvol_delete_lvstore", 00:05:27.442 "bdev_lvol_rename_lvstore", 00:05:27.442 "bdev_lvol_create_lvstore", 00:05:27.442 "bdev_raid_set_options", 00:05:27.442 "bdev_raid_remove_base_bdev", 00:05:27.442 "bdev_raid_add_base_bdev", 00:05:27.442 "bdev_raid_delete", 00:05:27.442 "bdev_raid_create", 00:05:27.442 "bdev_raid_get_bdevs", 00:05:27.442 "bdev_error_inject_error", 00:05:27.442 "bdev_error_delete", 00:05:27.442 "bdev_error_create", 00:05:27.442 "bdev_split_delete", 00:05:27.442 "bdev_split_create", 00:05:27.442 "bdev_delay_delete", 00:05:27.442 "bdev_delay_create", 00:05:27.442 "bdev_delay_update_latency", 00:05:27.442 "bdev_zone_block_delete", 00:05:27.442 "bdev_zone_block_create", 00:05:27.442 "blobfs_create", 00:05:27.442 "blobfs_detect", 00:05:27.442 "blobfs_set_cache_size", 00:05:27.442 "bdev_aio_delete", 00:05:27.442 
"bdev_aio_rescan", 00:05:27.442 "bdev_aio_create", 00:05:27.442 "bdev_ftl_set_property", 00:05:27.442 "bdev_ftl_get_properties", 00:05:27.442 "bdev_ftl_get_stats", 00:05:27.442 "bdev_ftl_unmap", 00:05:27.442 "bdev_ftl_unload", 00:05:27.442 "bdev_ftl_delete", 00:05:27.442 "bdev_ftl_load", 00:05:27.442 "bdev_ftl_create", 00:05:27.442 "bdev_virtio_attach_controller", 00:05:27.442 "bdev_virtio_scsi_get_devices", 00:05:27.442 "bdev_virtio_detach_controller", 00:05:27.442 "bdev_virtio_blk_set_hotplug", 00:05:27.442 "bdev_iscsi_delete", 00:05:27.442 "bdev_iscsi_create", 00:05:27.442 "bdev_iscsi_set_options", 00:05:27.442 "accel_error_inject_error", 00:05:27.442 "ioat_scan_accel_module", 00:05:27.442 "dsa_scan_accel_module", 00:05:27.442 "iaa_scan_accel_module", 00:05:27.442 "vfu_virtio_create_scsi_endpoint", 00:05:27.442 "vfu_virtio_scsi_remove_target", 00:05:27.442 "vfu_virtio_scsi_add_target", 00:05:27.442 "vfu_virtio_create_blk_endpoint", 00:05:27.442 "vfu_virtio_delete_endpoint", 00:05:27.442 "keyring_file_remove_key", 00:05:27.442 "keyring_file_add_key", 00:05:27.442 "keyring_linux_set_options", 00:05:27.442 "iscsi_get_histogram", 00:05:27.442 "iscsi_enable_histogram", 00:05:27.442 "iscsi_set_options", 00:05:27.442 "iscsi_get_auth_groups", 00:05:27.442 "iscsi_auth_group_remove_secret", 00:05:27.442 "iscsi_auth_group_add_secret", 00:05:27.442 "iscsi_delete_auth_group", 00:05:27.442 "iscsi_create_auth_group", 00:05:27.442 "iscsi_set_discovery_auth", 00:05:27.442 "iscsi_get_options", 00:05:27.442 "iscsi_target_node_request_logout", 00:05:27.442 "iscsi_target_node_set_redirect", 00:05:27.442 "iscsi_target_node_set_auth", 00:05:27.442 "iscsi_target_node_add_lun", 00:05:27.442 "iscsi_get_stats", 00:05:27.442 "iscsi_get_connections", 00:05:27.442 "iscsi_portal_group_set_auth", 00:05:27.442 "iscsi_start_portal_group", 00:05:27.442 "iscsi_delete_portal_group", 00:05:27.442 "iscsi_create_portal_group", 00:05:27.442 "iscsi_get_portal_groups", 00:05:27.442 
"iscsi_delete_target_node", 00:05:27.442 "iscsi_target_node_remove_pg_ig_maps", 00:05:27.442 "iscsi_target_node_add_pg_ig_maps", 00:05:27.442 "iscsi_create_target_node", 00:05:27.442 "iscsi_get_target_nodes", 00:05:27.442 "iscsi_delete_initiator_group", 00:05:27.442 "iscsi_initiator_group_remove_initiators", 00:05:27.442 "iscsi_initiator_group_add_initiators", 00:05:27.442 "iscsi_create_initiator_group", 00:05:27.442 "iscsi_get_initiator_groups", 00:05:27.442 "nvmf_set_crdt", 00:05:27.442 "nvmf_set_config", 00:05:27.442 "nvmf_set_max_subsystems", 00:05:27.442 "nvmf_stop_mdns_prr", 00:05:27.442 "nvmf_publish_mdns_prr", 00:05:27.442 "nvmf_subsystem_get_listeners", 00:05:27.442 "nvmf_subsystem_get_qpairs", 00:05:27.442 "nvmf_subsystem_get_controllers", 00:05:27.442 "nvmf_get_stats", 00:05:27.442 "nvmf_get_transports", 00:05:27.442 "nvmf_create_transport", 00:05:27.442 "nvmf_get_targets", 00:05:27.442 "nvmf_delete_target", 00:05:27.442 "nvmf_create_target", 00:05:27.442 "nvmf_subsystem_allow_any_host", 00:05:27.442 "nvmf_subsystem_remove_host", 00:05:27.442 "nvmf_subsystem_add_host", 00:05:27.442 "nvmf_ns_remove_host", 00:05:27.442 "nvmf_ns_add_host", 00:05:27.442 "nvmf_subsystem_remove_ns", 00:05:27.442 "nvmf_subsystem_add_ns", 00:05:27.442 "nvmf_subsystem_listener_set_ana_state", 00:05:27.442 "nvmf_discovery_get_referrals", 00:05:27.442 "nvmf_discovery_remove_referral", 00:05:27.442 "nvmf_discovery_add_referral", 00:05:27.442 "nvmf_subsystem_remove_listener", 00:05:27.442 "nvmf_subsystem_add_listener", 00:05:27.442 "nvmf_delete_subsystem", 00:05:27.442 "nvmf_create_subsystem", 00:05:27.442 "nvmf_get_subsystems", 00:05:27.442 "env_dpdk_get_mem_stats", 00:05:27.442 "nbd_get_disks", 00:05:27.443 "nbd_stop_disk", 00:05:27.443 "nbd_start_disk", 00:05:27.443 "ublk_recover_disk", 00:05:27.443 "ublk_get_disks", 00:05:27.443 "ublk_stop_disk", 00:05:27.443 "ublk_start_disk", 00:05:27.443 "ublk_destroy_target", 00:05:27.443 "ublk_create_target", 00:05:27.443 
"virtio_blk_create_transport", 00:05:27.443 "virtio_blk_get_transports", 00:05:27.443 "vhost_controller_set_coalescing", 00:05:27.443 "vhost_get_controllers", 00:05:27.443 "vhost_delete_controller", 00:05:27.443 "vhost_create_blk_controller", 00:05:27.443 "vhost_scsi_controller_remove_target", 00:05:27.443 "vhost_scsi_controller_add_target", 00:05:27.443 "vhost_start_scsi_controller", 00:05:27.443 "vhost_create_scsi_controller", 00:05:27.443 "thread_set_cpumask", 00:05:27.443 "framework_get_governor", 00:05:27.443 "framework_get_scheduler", 00:05:27.443 "framework_set_scheduler", 00:05:27.443 "framework_get_reactors", 00:05:27.443 "thread_get_io_channels", 00:05:27.443 "thread_get_pollers", 00:05:27.443 "thread_get_stats", 00:05:27.443 "framework_monitor_context_switch", 00:05:27.443 "spdk_kill_instance", 00:05:27.443 "log_enable_timestamps", 00:05:27.443 "log_get_flags", 00:05:27.443 "log_clear_flag", 00:05:27.443 "log_set_flag", 00:05:27.443 "log_get_level", 00:05:27.443 "log_set_level", 00:05:27.443 "log_get_print_level", 00:05:27.443 "log_set_print_level", 00:05:27.443 "framework_enable_cpumask_locks", 00:05:27.443 "framework_disable_cpumask_locks", 00:05:27.443 "framework_wait_init", 00:05:27.443 "framework_start_init", 00:05:27.443 "scsi_get_devices", 00:05:27.443 "bdev_get_histogram", 00:05:27.443 "bdev_enable_histogram", 00:05:27.443 "bdev_set_qos_limit", 00:05:27.443 "bdev_set_qd_sampling_period", 00:05:27.443 "bdev_get_bdevs", 00:05:27.443 "bdev_reset_iostat", 00:05:27.443 "bdev_get_iostat", 00:05:27.443 "bdev_examine", 00:05:27.443 "bdev_wait_for_examine", 00:05:27.443 "bdev_set_options", 00:05:27.443 "notify_get_notifications", 00:05:27.443 "notify_get_types", 00:05:27.443 "accel_get_stats", 00:05:27.443 "accel_set_options", 00:05:27.443 "accel_set_driver", 00:05:27.443 "accel_crypto_key_destroy", 00:05:27.443 "accel_crypto_keys_get", 00:05:27.443 "accel_crypto_key_create", 00:05:27.443 "accel_assign_opc", 00:05:27.443 "accel_get_module_info", 
00:05:27.443 "accel_get_opc_assignments", 00:05:27.443 "vmd_rescan", 00:05:27.443 "vmd_remove_device", 00:05:27.443 "vmd_enable", 00:05:27.443 "sock_get_default_impl", 00:05:27.443 "sock_set_default_impl", 00:05:27.443 "sock_impl_set_options", 00:05:27.443 "sock_impl_get_options", 00:05:27.443 "iobuf_get_stats", 00:05:27.443 "iobuf_set_options", 00:05:27.443 "keyring_get_keys", 00:05:27.443 "framework_get_pci_devices", 00:05:27.443 "framework_get_config", 00:05:27.443 "framework_get_subsystems", 00:05:27.443 "vfu_tgt_set_base_path", 00:05:27.443 "trace_get_info", 00:05:27.443 "trace_get_tpoint_group_mask", 00:05:27.443 "trace_disable_tpoint_group", 00:05:27.443 "trace_enable_tpoint_group", 00:05:27.443 "trace_clear_tpoint_mask", 00:05:27.443 "trace_set_tpoint_mask", 00:05:27.443 "spdk_get_version", 00:05:27.443 "rpc_get_methods" 00:05:27.443 ] 00:05:27.443 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:27.443 08:38:45 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:27.443 08:38:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.702 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:27.702 08:38:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 837126 00:05:27.702 08:38:45 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 837126 ']' 00:05:27.702 08:38:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 837126 00:05:27.702 08:38:45 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:27.702 08:38:45 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.702 08:38:45 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 837126 00:05:27.703 08:38:45 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.703 08:38:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.703 08:38:45 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 837126' 00:05:27.703 killing process with pid 837126 00:05:27.703 08:38:45 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 837126 00:05:27.703 08:38:45 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 837126 00:05:27.962 00:05:27.962 real 0m1.200s 00:05:27.962 user 0m2.130s 00:05:27.962 sys 0m0.441s 00:05:27.962 08:38:46 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.962 08:38:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.962 ************************************ 00:05:27.962 END TEST spdkcli_tcp 00:05:27.962 ************************************ 00:05:27.962 08:38:46 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.962 08:38:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.962 08:38:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.962 08:38:46 -- common/autotest_common.sh@10 -- # set +x 00:05:27.962 ************************************ 00:05:27.962 START TEST dpdk_mem_utility 00:05:27.962 ************************************ 00:05:27.962 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.222 * Looking for test storage... 
00:05:28.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:28.222 08:38:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:28.222 08:38:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=837334 00:05:28.222 08:38:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.222 08:38:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 837334 00:05:28.222 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 837334 ']' 00:05:28.222 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.222 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.222 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.222 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.222 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.222 [2024-07-26 08:38:46.499887] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:05:28.222 [2024-07-26 08:38:46.499990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837334 ] 00:05:28.222 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.222 [2024-07-26 08:38:46.532202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:28.222 [2024-07-26 08:38:46.558140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.222 [2024-07-26 08:38:46.642233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.482 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.482 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:28.482 08:38:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:28.482 08:38:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:28.482 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.482 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.482 { 00:05:28.482 "filename": "/tmp/spdk_mem_dump.txt" 00:05:28.482 } 00:05:28.482 08:38:46 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.482 08:38:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:28.743 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:28.743 1 heaps totaling size 814.000000 MiB 00:05:28.743 size: 814.000000 MiB heap id: 0 00:05:28.743 end heaps---------- 00:05:28.743 8 mempools totaling size 598.116089 MiB 00:05:28.743 size: 212.674988 MiB name: 
PDU_immediate_data_Pool 00:05:28.743 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:28.743 size: 84.521057 MiB name: bdev_io_837334 00:05:28.743 size: 51.011292 MiB name: evtpool_837334 00:05:28.743 size: 50.003479 MiB name: msgpool_837334 00:05:28.743 size: 21.763794 MiB name: PDU_Pool 00:05:28.743 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:28.743 size: 0.026123 MiB name: Session_Pool 00:05:28.743 end mempools------- 00:05:28.743 6 memzones totaling size 4.142822 MiB 00:05:28.743 size: 1.000366 MiB name: RG_ring_0_837334 00:05:28.743 size: 1.000366 MiB name: RG_ring_1_837334 00:05:28.743 size: 1.000366 MiB name: RG_ring_4_837334 00:05:28.743 size: 1.000366 MiB name: RG_ring_5_837334 00:05:28.743 size: 0.125366 MiB name: RG_ring_2_837334 00:05:28.743 size: 0.015991 MiB name: RG_ring_3_837334 00:05:28.743 end memzones------- 00:05:28.743 08:38:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:28.743 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:28.743 list of free elements. 
size: 12.519348 MiB 00:05:28.743 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:28.743 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:28.743 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:28.743 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:28.743 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:28.743 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:28.743 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:28.743 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:28.743 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:28.743 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:28.743 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:28.743 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:28.743 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:28.743 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:28.743 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:28.743 list of standard malloc elements. 
size: 199.218079 MiB 00:05:28.743 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:28.743 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:28.743 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:28.743 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:28.743 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:28.743 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:28.743 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:28.743 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:28.743 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:28.743 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:28.743 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:28.743 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:28.743 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:28.743 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:28.743 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:28.743 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:28.743 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:28.743 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:28.743 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:28.743 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:28.743 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:28.743 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:28.743 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:28.743 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:28.743 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:28.743 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:28.743 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:28.743 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:05:28.743 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:28.743 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:28.743 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:28.744 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:28.744 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:28.744 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:28.744 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:28.744 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:28.744 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:28.744 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:28.744 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:28.744 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:28.744 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:28.744 list of memzone associated elements. 
size: 602.262573 MiB 00:05:28.744 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:28.744 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:28.744 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:28.744 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:28.744 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:28.744 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_837334_0 00:05:28.744 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:28.744 associated memzone info: size: 48.002930 MiB name: MP_evtpool_837334_0 00:05:28.744 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:28.744 associated memzone info: size: 48.002930 MiB name: MP_msgpool_837334_0 00:05:28.744 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:28.744 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:28.744 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:28.744 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:28.744 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:28.744 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_837334 00:05:28.744 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:28.744 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_837334 00:05:28.744 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:28.744 associated memzone info: size: 1.007996 MiB name: MP_evtpool_837334 00:05:28.744 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:28.744 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:28.744 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:28.744 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:28.744 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:28.744 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:28.744 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:28.744 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:28.744 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:28.744 associated memzone info: size: 1.000366 MiB name: RG_ring_0_837334 00:05:28.744 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:28.744 associated memzone info: size: 1.000366 MiB name: RG_ring_1_837334 00:05:28.744 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:28.744 associated memzone info: size: 1.000366 MiB name: RG_ring_4_837334 00:05:28.744 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:28.744 associated memzone info: size: 1.000366 MiB name: RG_ring_5_837334 00:05:28.744 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:28.744 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_837334 00:05:28.744 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:28.744 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:28.744 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:28.744 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:28.744 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:28.744 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:28.744 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:28.744 associated memzone info: size: 0.125366 MiB name: RG_ring_2_837334 00:05:28.744 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:28.744 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:28.744 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:28.744 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:28.744 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:28.744 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_837334 00:05:28.744 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:28.744 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:28.744 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:28.744 associated memzone info: size: 0.000183 MiB name: MP_msgpool_837334 00:05:28.744 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:28.744 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_837334 00:05:28.744 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:28.744 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:28.744 08:38:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:28.744 08:38:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 837334 00:05:28.744 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 837334 ']' 00:05:28.744 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 837334 00:05:28.744 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:28.744 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.744 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 837334 00:05:28.744 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.744 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.744 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 837334' 00:05:28.744 killing process with pid 837334 00:05:28.744 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 837334 00:05:28.744 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 837334 00:05:29.002 00:05:29.002 real 0m1.059s 00:05:29.002 user 0m1.023s 
00:05:29.002 sys 0m0.392s 00:05:29.002 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.002 08:38:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.002 ************************************ 00:05:29.002 END TEST dpdk_mem_utility 00:05:29.002 ************************************ 00:05:29.261 08:38:47 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:29.261 08:38:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.261 08:38:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.261 08:38:47 -- common/autotest_common.sh@10 -- # set +x 00:05:29.261 ************************************ 00:05:29.261 START TEST event 00:05:29.261 ************************************ 00:05:29.261 08:38:47 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:29.261 * Looking for test storage... 
00:05:29.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:29.261 08:38:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:29.261 08:38:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:29.261 08:38:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.261 08:38:47 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:29.261 08:38:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.261 08:38:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.261 ************************************ 00:05:29.261 START TEST event_perf 00:05:29.261 ************************************ 00:05:29.261 08:38:47 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.261 Running I/O for 1 seconds...[2024-07-26 08:38:47.587000] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:29.261 [2024-07-26 08:38:47.587101] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837521 ] 00:05:29.261 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.261 [2024-07-26 08:38:47.618650] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:29.261 [2024-07-26 08:38:47.647780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.520 [2024-07-26 08:38:47.742659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.520 [2024-07-26 08:38:47.742725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.520 [2024-07-26 08:38:47.742815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.520 [2024-07-26 08:38:47.742818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.458 Running I/O for 1 seconds... 00:05:30.458 lcore 0: 231876 00:05:30.458 lcore 1: 231877 00:05:30.458 lcore 2: 231877 00:05:30.458 lcore 3: 231877 00:05:30.458 done. 00:05:30.458 00:05:30.458 real 0m1.248s 00:05:30.458 user 0m4.159s 00:05:30.458 sys 0m0.084s 00:05:30.458 08:38:48 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.458 08:38:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.458 ************************************ 00:05:30.458 END TEST event_perf 00:05:30.458 ************************************ 00:05:30.458 08:38:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.458 08:38:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:30.458 08:38:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.458 08:38:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.458 ************************************ 00:05:30.458 START TEST event_reactor 00:05:30.458 ************************************ 00:05:30.458 08:38:48 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.458 [2024-07-26 08:38:48.886410] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:05:30.458 [2024-07-26 08:38:48.886479] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837681 ] 00:05:30.458 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.717 [2024-07-26 08:38:48.919545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.717 [2024-07-26 08:38:48.951021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.717 [2024-07-26 08:38:49.040346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.093 test_start 00:05:32.093 oneshot 00:05:32.093 tick 100 00:05:32.093 tick 100 00:05:32.093 tick 250 00:05:32.093 tick 100 00:05:32.093 tick 100 00:05:32.093 tick 100 00:05:32.093 tick 250 00:05:32.093 tick 500 00:05:32.093 tick 100 00:05:32.093 tick 100 00:05:32.093 tick 250 00:05:32.093 tick 100 00:05:32.093 tick 100 00:05:32.093 test_end 00:05:32.093 00:05:32.093 real 0m1.243s 00:05:32.093 user 0m1.154s 00:05:32.093 sys 0m0.085s 00:05:32.093 08:38:50 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.093 08:38:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:32.093 ************************************ 00:05:32.093 END TEST event_reactor 00:05:32.093 ************************************ 00:05:32.093 08:38:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.093 08:38:50 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:32.093 08:38:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.093 08:38:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.093 ************************************ 00:05:32.093 
START TEST event_reactor_perf 00:05:32.093 ************************************ 00:05:32.093 08:38:50 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.093 [2024-07-26 08:38:50.177751] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:32.093 [2024-07-26 08:38:50.177818] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837838 ] 00:05:32.093 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.093 [2024-07-26 08:38:50.210430] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:32.093 [2024-07-26 08:38:50.240072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.093 [2024-07-26 08:38:50.332842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.028 test_start 00:05:33.028 test_end 00:05:33.028 Performance: 355479 events per second 00:05:33.028 00:05:33.028 real 0m1.251s 00:05:33.028 user 0m1.163s 00:05:33.028 sys 0m0.083s 00:05:33.028 08:38:51 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.028 08:38:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.028 ************************************ 00:05:33.028 END TEST event_reactor_perf 00:05:33.028 ************************************ 00:05:33.028 08:38:51 event -- event/event.sh@49 -- # uname -s 00:05:33.028 08:38:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:33.028 08:38:51 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:33.028 08:38:51 event -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.028 08:38:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.028 08:38:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.028 ************************************ 00:05:33.028 START TEST event_scheduler 00:05:33.028 ************************************ 00:05:33.028 08:38:51 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:33.286 * Looking for test storage... 00:05:33.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:33.286 08:38:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:33.286 08:38:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=838024 00:05:33.286 08:38:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:33.286 08:38:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.286 08:38:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 838024 00:05:33.286 08:38:51 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 838024 ']' 00:05:33.286 08:38:51 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.286 08:38:51 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.286 08:38:51 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:33.286 08:38:51 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.286 08:38:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.286 [2024-07-26 08:38:51.555911] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:33.286 [2024-07-26 08:38:51.556008] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838024 ] 00:05:33.286 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.286 [2024-07-26 08:38:51.591164] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:33.286 [2024-07-26 08:38:51.617201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.286 [2024-07-26 08:38:51.703719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.286 [2024-07-26 08:38:51.703800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.286 [2024-07-26 08:38:51.703747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.286 [2024-07-26 08:38:51.703802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:33.545 08:38:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.545 [2024-07-26 08:38:51.780708] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some 
but not all of a set of SMT siblings 00:05:33.545 [2024-07-26 08:38:51.780731] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:33.545 [2024-07-26 08:38:51.780747] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:33.545 [2024-07-26 08:38:51.780758] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:33.545 [2024-07-26 08:38:51.780767] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.545 08:38:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.545 [2024-07-26 08:38:51.874236] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.545 08:38:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.545 ************************************ 00:05:33.545 START TEST scheduler_create_thread 00:05:33.545 ************************************ 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.545 2 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.545 3 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.545 4 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.545 5 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.545 6 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.545 7 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.545 8 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.545 9 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.545 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.546 10 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.546 08:38:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.546 08:38:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.546 08:38:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:33.546 08:38:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:33.546 08:38:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.546 08:38:52 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.114 08:38:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.114 00:05:34.114 real 0m0.590s 00:05:34.114 user 0m0.011s 00:05:34.114 sys 0m0.002s 00:05:34.114 08:38:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.114 08:38:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.114 ************************************ 00:05:34.114 END TEST scheduler_create_thread 00:05:34.114 ************************************ 00:05:34.114 08:38:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:34.114 08:38:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 838024 00:05:34.114 08:38:52 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 838024 ']' 00:05:34.114 08:38:52 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 838024 00:05:34.114 08:38:52 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:34.114 08:38:52 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.114 08:38:52 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 838024 00:05:34.114 08:38:52 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:34.114 08:38:52 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:34.114 08:38:52 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 838024' 00:05:34.114 killing process with pid 838024 00:05:34.114 08:38:52 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 838024 00:05:34.114 08:38:52 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 838024 00:05:34.704 [2024-07-26 08:38:52.974222] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:34.968 00:05:34.968 real 0m1.722s 00:05:34.968 user 0m2.289s 00:05:34.968 sys 0m0.305s 00:05:34.968 08:38:53 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.968 08:38:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.968 ************************************ 00:05:34.968 END TEST event_scheduler 00:05:34.968 ************************************ 00:05:34.968 08:38:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.968 08:38:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.968 08:38:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.968 08:38:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.968 08:38:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.968 ************************************ 00:05:34.968 START TEST app_repeat 00:05:34.968 ************************************ 00:05:34.968 08:38:53 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=838324 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r 
/var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 838324' 00:05:34.968 Process app_repeat pid: 838324 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.968 spdk_app_start Round 0 00:05:34.968 08:38:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 838324 /var/tmp/spdk-nbd.sock 00:05:34.968 08:38:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 838324 ']' 00:05:34.968 08:38:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.968 08:38:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.969 08:38:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.969 08:38:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.969 08:38:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.969 [2024-07-26 08:38:53.264669] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:34.969 [2024-07-26 08:38:53.264734] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838324 ] 00:05:34.969 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.969 [2024-07-26 08:38:53.296614] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:34.969 [2024-07-26 08:38:53.326576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.969 [2024-07-26 08:38:53.419537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.969 [2024-07-26 08:38:53.419541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.226 08:38:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.226 08:38:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:35.226 08:38:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.483 Malloc0 00:05:35.483 08:38:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.741 Malloc1 00:05:35.741 08:38:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.741 08:38:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.741 08:38:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.741 08:38:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.741 08:38:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.741 08:38:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.741 08:38:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.741 08:38:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.742 08:38:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.742 08:38:54 event.app_repeat -- 
bdev/nbd_common.sh@10 -- # local bdev_list
00:05:35.742 08:38:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:35.742 08:38:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:35.742 08:38:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:35.742 08:38:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:35.742 08:38:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:35.742 08:38:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:35.999 /dev/nbd0
00:05:35.999 08:38:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:35.999 08:38:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:35.999 1+0 records in
00:05:35.999 1+0 records out
00:05:35.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171162 s, 23.9 MB/s
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:35.999 08:38:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:35.999 08:38:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:35.999 08:38:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:36.000 08:38:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:36.257 /dev/nbd1
00:05:36.257 08:38:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:36.257 08:38:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:36.257 08:38:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:36.257 08:38:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:36.257 08:38:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:36.257 08:38:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:36.257 08:38:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:36.257 08:38:54 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:36.257 08:38:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:36.257 08:38:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:36.258 08:38:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:36.258 1+0 records in
00:05:36.258 1+0 records out
00:05:36.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196643 s, 20.8 MB/s
00:05:36.258 08:38:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:36.258 08:38:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:36.258 08:38:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:36.258 08:38:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:36.258 08:38:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:36.258 08:38:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:36.258 08:38:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:36.258 08:38:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:36.258 08:38:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.258 08:38:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:36.515 {
00:05:36.515 "nbd_device": "/dev/nbd0",
00:05:36.515 "bdev_name": "Malloc0"
00:05:36.515 },
00:05:36.515 {
00:05:36.515 "nbd_device": "/dev/nbd1",
00:05:36.515 "bdev_name": "Malloc1"
00:05:36.515 }
00:05:36.515 ]'
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:36.515 {
00:05:36.515 "nbd_device": "/dev/nbd0",
00:05:36.515 "bdev_name": "Malloc0"
00:05:36.515 },
00:05:36.515 {
00:05:36.515 "nbd_device": "/dev/nbd1",
00:05:36.515 "bdev_name": "Malloc1"
00:05:36.515 }
00:05:36.515 ]'
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:36.515 /dev/nbd1'
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:36.515 /dev/nbd1'
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:36.515 256+0 records in
00:05:36.515 256+0 records out
00:05:36.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051106 s, 205 MB/s
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:36.515 256+0 records in
00:05:36.515 256+0 records out
00:05:36.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206928 s, 50.7 MB/s
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:36.515 256+0 records in
00:05:36.515 256+0 records out
00:05:36.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234529 s, 44.7 MB/s
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:36.515 08:38:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:36.516 08:38:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:36.774 08:38:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:36.774 08:38:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:36.774 08:38:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:36.774 08:38:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:36.774 08:38:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:37.033 08:38:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:37.033 08:38:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:37.033 08:38:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:37.033 08:38:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:37.033 08:38:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:37.291 08:38:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:37.549 08:38:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:37.549 08:38:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:37.808 08:38:56 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:38.067 [2024-07-26 08:38:56.310236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:38.067 [2024-07-26 08:38:56.400003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:38.067 [2024-07-26 08:38:56.400007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.067 [2024-07-26 08:38:56.458073] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:38.067 [2024-07-26 08:38:56.458156] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:41.347 08:38:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:41.347 08:38:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:41.347 spdk_app_start Round 1
00:05:41.347 08:38:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 838324 /var/tmp/spdk-nbd.sock
00:05:41.347 08:38:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 838324 ']'
00:05:41.347 08:38:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:41.347 08:38:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:41.347 08:38:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:41.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:41.347 08:38:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:41.347 08:38:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:41.347 08:38:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:41.347 08:38:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:41.347 08:38:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:41.347 Malloc0
00:05:41.347 08:38:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:41.604 Malloc1
00:05:41.604 08:38:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:41.604 08:38:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:41.605 08:38:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:41.605 08:38:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:41.605 08:38:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:41.862 /dev/nbd0
00:05:41.862 08:39:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:41.862 08:39:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:41.862 1+0 records in
00:05:41.862 1+0 records out
00:05:41.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183568 s, 22.3 MB/s
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:41.862 08:39:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:41.862 08:39:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:41.862 08:39:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:41.862 08:39:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:42.120 /dev/nbd1
00:05:42.120 08:39:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:42.120 08:39:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:42.120 1+0 records in
00:05:42.120 1+0 records out
00:05:42.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167644 s, 24.4 MB/s
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:42.120 08:39:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:42.120 08:39:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:42.120 08:39:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:42.120 08:39:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:42.120 08:39:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:42.120 08:39:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:42.378 {
00:05:42.378 "nbd_device": "/dev/nbd0",
00:05:42.378 "bdev_name": "Malloc0"
00:05:42.378 },
00:05:42.378 {
00:05:42.378 "nbd_device": "/dev/nbd1",
00:05:42.378 "bdev_name": "Malloc1"
00:05:42.378 }
00:05:42.378 ]'
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:42.378 {
00:05:42.378 "nbd_device": "/dev/nbd0",
00:05:42.378 "bdev_name": "Malloc0"
00:05:42.378 },
00:05:42.378 {
00:05:42.378 "nbd_device": "/dev/nbd1",
00:05:42.378 "bdev_name": "Malloc1"
00:05:42.378 }
00:05:42.378 ]'
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:42.378 /dev/nbd1'
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:42.378 /dev/nbd1'
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:42.378 256+0 records in
00:05:42.378 256+0 records out
00:05:42.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00411805 s, 255 MB/s
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:42.378 256+0 records in
00:05:42.378 256+0 records out
00:05:42.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212119 s, 49.4 MB/s
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:42.378 256+0 records in
00:05:42.378 256+0 records out
00:05:42.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267734 s, 39.2 MB/s
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:42.378 08:39:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:42.635 08:39:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:42.893 08:39:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:42.893 08:39:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:42.893 08:39:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:42.893 08:39:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:42.893 08:39:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:42.893 08:39:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:42.893 08:39:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:42.893 08:39:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:42.893 08:39:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.150 08:39:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:43.407 08:39:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:43.407 08:39:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:43.666 08:39:01 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:43.924 [2024-07-26 08:39:02.182208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:43.924 [2024-07-26 08:39:02.272653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:43.924 [2024-07-26 08:39:02.272657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.924 [2024-07-26 08:39:02.335263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:43.924 [2024-07-26 08:39:02.335340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:47.199 08:39:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:47.199 08:39:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:47.199 spdk_app_start Round 2
00:05:47.199 08:39:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 838324 /var/tmp/spdk-nbd.sock
00:05:47.199 08:39:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 838324 ']'
00:05:47.199 08:39:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:47.199 08:39:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:47.199 08:39:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:47.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:47.199 08:39:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:47.199 08:39:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:47.199 08:39:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:47.199 08:39:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:47.199 08:39:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:47.199 Malloc0
00:05:47.199 08:39:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:47.457 Malloc1
00:05:47.457 08:39:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:47.457 08:39:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:47.713 /dev/nbd0
00:05:47.714 08:39:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:47.714 08:39:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:47.714 1+0 records in
00:05:47.714 1+0 records out
00:05:47.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000150651 s, 27.2 MB/s
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:47.714 08:39:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:47.714 08:39:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:47.714 08:39:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:47.714 08:39:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:47.971 /dev/nbd1
00:05:47.971 08:39:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:47.971 08:39:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:47.971 1+0 records in
00:05:47.971 1+0 records out
00:05:47.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022166 s, 18.5 MB/s
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:47.971 08:39:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:47.971 08:39:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:47.971 08:39:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:47.971 08:39:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:47.971 08:39:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:47.971 08:39:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:48.229 {
00:05:48.229 "nbd_device": "/dev/nbd0",
00:05:48.229 "bdev_name": "Malloc0"
00:05:48.229 },
00:05:48.229 {
00:05:48.229 "nbd_device": "/dev/nbd1",
00:05:48.229 "bdev_name": "Malloc1"
00:05:48.229 }
00:05:48.229 ]'
00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:48.229 {
00:05:48.229 "nbd_device": "/dev/nbd0",
00:05:48.229 "bdev_name": "Malloc0"
00:05:48.229 },
00:05:48.229 {
00:05:48.229 "nbd_device": "/dev/nbd1",
00:05:48.229 "bdev_name": "Malloc1"
00:05:48.229 }
00:05:48.229 ]'
00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:48.229 /dev/nbd1'
00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:48.229 /dev/nbd1'
08:39:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.229 256+0 records in 00:05:48.229 256+0 records out 00:05:48.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467097 s, 224 MB/s 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.229 256+0 records in 00:05:48.229 256+0 records out 00:05:48.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215395 s, 48.7 MB/s 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.229 256+0 records in 00:05:48.229 256+0 records out 00:05:48.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024926 s, 42.1 MB/s 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.229 08:39:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.487 08:39:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.487 08:39:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.487 08:39:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.487 08:39:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.487 08:39:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.487 08:39:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:48.487 08:39:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.487 08:39:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.487 08:39:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.487 08:39:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.744 08:39:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.744 08:39:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.744 08:39:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.744 08:39:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.744 08:39:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.744 08:39:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.744 08:39:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.744 08:39:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.744 08:39:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.744 08:39:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.002 08:39:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.002 08:39:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.002 08:39:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.002 08:39:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.002 08:39:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.002 08:39:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.002 08:39:07 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:49.002 08:39:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.002 08:39:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.002 08:39:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.002 08:39:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.259 08:39:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.259 08:39:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.517 08:39:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.776 [2024-07-26 08:39:08.056645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.776 [2024-07-26 08:39:08.149707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.776 [2024-07-26 08:39:08.149711] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.776 [2024-07-26 08:39:08.207072] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.776 [2024-07-26 08:39:08.207166] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.093 08:39:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 838324 /var/tmp/spdk-nbd.sock 00:05:53.093 08:39:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 838324 ']' 00:05:53.093 08:39:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.093 08:39:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.093 08:39:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:53.093 08:39:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.093 08:39:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:53.093 08:39:11 event.app_repeat -- event/event.sh@39 -- # killprocess 838324 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 838324 ']' 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 838324 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 838324 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 838324' 00:05:53.093 killing process with pid 838324 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@969 -- # kill 838324 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@974 -- # wait 838324 00:05:53.093 spdk_app_start is called in Round 0. 00:05:53.093 Shutdown signal received, stop current app iteration 00:05:53.093 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:05:53.093 spdk_app_start is called in Round 1. 00:05:53.093 Shutdown signal received, stop current app iteration 00:05:53.093 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:05:53.093 spdk_app_start is called in Round 2. 
00:05:53.093 Shutdown signal received, stop current app iteration 00:05:53.093 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:05:53.093 spdk_app_start is called in Round 3. 00:05:53.093 Shutdown signal received, stop current app iteration 00:05:53.093 08:39:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:53.093 08:39:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:53.093 00:05:53.093 real 0m18.077s 00:05:53.093 user 0m39.539s 00:05:53.093 sys 0m3.156s 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.093 08:39:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.093 ************************************ 00:05:53.093 END TEST app_repeat 00:05:53.093 ************************************ 00:05:53.094 08:39:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:53.094 08:39:11 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:53.094 08:39:11 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.094 08:39:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.094 08:39:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.094 ************************************ 00:05:53.094 START TEST cpu_locks 00:05:53.094 ************************************ 00:05:53.094 08:39:11 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:53.094 * Looking for test storage... 
00:05:53.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:53.094 08:39:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:53.094 08:39:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:53.094 08:39:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:53.094 08:39:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:53.094 08:39:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.094 08:39:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.094 08:39:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.094 ************************************ 00:05:53.094 START TEST default_locks 00:05:53.094 ************************************ 00:05:53.094 08:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:53.094 08:39:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=841300 00:05:53.094 08:39:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.094 08:39:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 841300 00:05:53.094 08:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 841300 ']' 00:05:53.094 08:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.094 08:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.094 08:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:53.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.094 08:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.094 08:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.094 [2024-07-26 08:39:11.503887] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:53.094 [2024-07-26 08:39:11.503965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841300 ] 00:05:53.094 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.094 [2024-07-26 08:39:11.534707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:53.352 [2024-07-26 08:39:11.562864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.352 [2024-07-26 08:39:11.653094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.610 08:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.610 08:39:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:53.610 08:39:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 841300 00:05:53.610 08:39:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 841300 00:05:53.610 08:39:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.867 lslocks: write error 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 841300 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 841300 ']' 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- 
common/autotest_common.sh@954 -- # kill -0 841300 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 841300 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 841300' 00:05:53.867 killing process with pid 841300 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 841300 00:05:53.867 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 841300 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 841300 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 841300 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 841300 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 841300 
']' 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (841300) - No such process 00:05:54.431 ERROR: process (pid: 841300) is no longer running 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.431 00:05:54.431 real 0m1.266s 00:05:54.431 user 0m1.213s 00:05:54.431 sys 
0m0.535s 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.431 08:39:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.431 ************************************ 00:05:54.431 END TEST default_locks 00:05:54.431 ************************************ 00:05:54.431 08:39:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:54.431 08:39:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.431 08:39:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.431 08:39:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.431 ************************************ 00:05:54.431 START TEST default_locks_via_rpc 00:05:54.431 ************************************ 00:05:54.431 08:39:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:54.431 08:39:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=841471 00:05:54.431 08:39:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.431 08:39:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 841471 00:05:54.431 08:39:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 841471 ']' 00:05:54.431 08:39:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.431 08:39:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.431 08:39:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:54.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.431 08:39:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.431 08:39:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.431 [2024-07-26 08:39:12.817749] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:54.431 [2024-07-26 08:39:12.817832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841471 ] 00:05:54.431 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.431 [2024-07-26 08:39:12.848785] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:54.431 [2024-07-26 08:39:12.880502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.690 [2024-07-26 08:39:12.968473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:54.948 08:39:13 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 841471 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 841471 00:05:54.948 08:39:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 841471 00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 841471 ']' 00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 841471 00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 841471 00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']'
00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 841471'
00:05:55.205 killing process with pid 841471
00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 841471
00:05:55.205 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 841471
00:05:55.464
00:05:55.464 real	0m1.132s
00:05:55.464 user	0m1.059s
00:05:55.464 sys	0m0.517s
00:05:55.464 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:55.464 08:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:55.464 ************************************
00:05:55.464 END TEST default_locks_via_rpc
00:05:55.464 ************************************
00:05:55.464 08:39:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:55.464 08:39:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:55.464 08:39:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:55.464 08:39:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:55.722 ************************************
00:05:55.722 START TEST non_locking_app_on_locked_coremask
00:05:55.722 ************************************
00:05:55.722 08:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:05:55.722 08:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=841631
00:05:55.722 08:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:55.722 08:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 841631 /var/tmp/spdk.sock
00:05:55.722 08:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 841631 ']'
00:05:55.722 08:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:55.722 08:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:55.722 08:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:55.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:55.722 08:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:55.722 08:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:55.722 [2024-07-26 08:39:14.001082] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:05:55.722 [2024-07-26 08:39:14.001177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841631 ]
00:05:55.722 EAL: No free 2048 kB hugepages reported on node 1
00:05:55.722 [2024-07-26 08:39:14.032176] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:55.722 [2024-07-26 08:39:14.063789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:55.722 [2024-07-26 08:39:14.151352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=841639
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 841639 /var/tmp/spdk2.sock
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 841639 ']'
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:55.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:55.980 08:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:56.237 [2024-07-26 08:39:14.452717] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:05:56.237 [2024-07-26 08:39:14.452821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841639 ]
00:05:56.237 EAL: No free 2048 kB hugepages reported on node 1
00:05:56.237 [2024-07-26 08:39:14.486859] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:56.237 [2024-07-26 08:39:14.551442] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:56.237 [2024-07-26 08:39:14.551473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.495 [2024-07-26 08:39:14.735915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.059 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:57.059 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:57.059 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 841631
00:05:57.059 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 841631
00:05:57.059 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:57.624 lslocks: write error
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 841631
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 841631 ']'
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 841631
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 841631
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 841631'
00:05:57.624 killing process with pid 841631
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 841631
00:05:57.624 08:39:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 841631
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 841639
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 841639 ']'
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 841639
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 841639
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 841639'
00:05:58.557 killing process with pid 841639
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 841639
00:05:58.557 08:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 841639
00:05:58.816
00:05:58.816 real	0m3.201s
00:05:58.816 user	0m3.331s
00:05:58.816 sys	0m1.040s
00:05:58.816 08:39:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:58.816 08:39:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:58.816 ************************************
00:05:58.816 END TEST non_locking_app_on_locked_coremask
00:05:58.816 ************************************
00:05:58.816 08:39:17 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:58.816 08:39:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:58.816 08:39:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:58.816 08:39:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:58.816 ************************************
00:05:58.816 START TEST locking_app_on_unlocked_coremask
00:05:58.816 ************************************
00:05:58.816 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:05:58.816 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=842065
00:05:58.816 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:58.816 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 842065 /var/tmp/spdk.sock
00:05:58.816 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 842065 ']'
00:05:58.816 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:58.816 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:58.816 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:58.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:58.816 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:58.816 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:58.816 [2024-07-26 08:39:17.252934] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:05:58.816 [2024-07-26 08:39:17.253017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842065 ]
00:05:59.074 EAL: No free 2048 kB hugepages reported on node 1
00:05:59.074 [2024-07-26 08:39:17.284458] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:59.074 [2024-07-26 08:39:17.316294] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:59.074 [2024-07-26 08:39:17.316324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.074 [2024-07-26 08:39:17.405304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=842070
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 842070 /var/tmp/spdk2.sock
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 842070 ']'
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:59.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:59.332 08:39:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:59.332 [2024-07-26 08:39:17.710465] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:05:59.332 [2024-07-26 08:39:17.710541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842070 ]
00:05:59.332 EAL: No free 2048 kB hugepages reported on node 1
00:05:59.332 [2024-07-26 08:39:17.745893] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:59.589 [2024-07-26 08:39:17.809686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.589 [2024-07-26 08:39:17.995817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.520 08:39:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:00.520 08:39:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:00.520 08:39:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 842070
00:06:00.520 08:39:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 842070
00:06:00.520 08:39:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:00.778 lslocks: write error
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 842065
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 842065 ']'
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 842065
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 842065
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 842065'
00:06:00.778 killing process with pid 842065
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 842065
00:06:00.778 08:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 842065
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 842070
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 842070 ']'
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 842070
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 842070
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 842070'
00:06:01.711 killing process with pid 842070
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 842070
00:06:01.711 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 842070
00:06:02.279
00:06:02.279 real	0m3.252s
00:06:02.279 user	0m3.407s
00:06:02.279 sys	0m1.083s
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.279 ************************************
00:06:02.279 END TEST locking_app_on_unlocked_coremask
00:06:02.279 ************************************
00:06:02.279 08:39:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:02.279 08:39:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:02.279 08:39:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:02.279 08:39:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:02.279 ************************************
00:06:02.279 START TEST locking_app_on_locked_coremask
00:06:02.279 ************************************
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=842498
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 842498 /var/tmp/spdk.sock
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 842498 ']'
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:02.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:02.279 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.279 [2024-07-26 08:39:20.549151] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:06:02.279 [2024-07-26 08:39:20.549245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842498 ]
00:06:02.279 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.279 [2024-07-26 08:39:20.581305] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:02.279 [2024-07-26 08:39:20.607371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.279 [2024-07-26 08:39:20.696171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=842508
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 842508 /var/tmp/spdk2.sock
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 842508 /var/tmp/spdk2.sock
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 842508 /var/tmp/spdk2.sock
00:06:02.537 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 842508 ']'
00:06:02.538 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:02.538 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:02.538 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:02.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:02.538 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:02.538 08:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.795 [2024-07-26 08:39:20.999005] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:06:02.795 [2024-07-26 08:39:20.999121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842508 ]
00:06:02.795 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.795 [2024-07-26 08:39:21.034782] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:02.795 [2024-07-26 08:39:21.093288] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 842498 has claimed it.
00:06:02.795 [2024-07-26 08:39:21.093335] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:03.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (842508) - No such process
00:06:03.362 ERROR: process (pid: 842508) is no longer running
00:06:03.363 08:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:03.363 08:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:03.363 08:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:03.363 08:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:03.363 08:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:03.363 08:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:03.363 08:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 842498
00:06:03.363 08:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 842498
00:06:03.363 08:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:03.927 lslocks: write error
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 842498
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 842498 ']'
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 842498
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 842498
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 842498'
00:06:03.927 killing process with pid 842498
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 842498
00:06:03.927 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 842498
00:06:04.186
00:06:04.186 real	0m2.091s
00:06:04.186 user	0m2.264s
00:06:04.186 sys	0m0.661s
00:06:04.186 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:04.186 08:39:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:04.186 ************************************
00:06:04.186 END TEST locking_app_on_locked_coremask
00:06:04.186 ************************************
00:06:04.186 08:39:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:04.186 08:39:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:04.186 08:39:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:04.186 08:39:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:04.186 ************************************
00:06:04.186 START TEST locking_overlapped_coremask
00:06:04.186 ************************************
00:06:04.186 08:39:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:06:04.186 08:39:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=842792
00:06:04.186 08:39:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:04.186 08:39:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 842792 /var/tmp/spdk.sock
00:06:04.186 08:39:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 842792 ']'
00:06:04.186 08:39:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:04.186 08:39:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:04.186 08:39:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:04.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:04.186 08:39:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:04.186 08:39:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:04.446 [2024-07-26 08:39:22.694461] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:06:04.446 [2024-07-26 08:39:22.694530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842792 ]
00:06:04.446 EAL: No free 2048 kB hugepages reported on node 1
00:06:04.446 [2024-07-26 08:39:22.727824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:04.446 [2024-07-26 08:39:22.755854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:04.446 [2024-07-26 08:39:22.849332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:04.446 [2024-07-26 08:39:22.849401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:04.446 [2024-07-26 08:39:22.849404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=842808
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 842808 /var/tmp/spdk2.sock
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 842808 /var/tmp/spdk2.sock
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 842808 /var/tmp/spdk2.sock
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 842808 ']'
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:04.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:04.705 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:04.983 [2024-07-26 08:39:23.152285] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:06:04.983 [2024-07-26 08:39:23.152364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842808 ]
00:06:04.983 EAL: No free 2048 kB hugepages reported on node 1
00:06:04.983 [2024-07-26 08:39:23.186132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:04.983 [2024-07-26 08:39:23.240495] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 842792 has claimed it.
00:06:04.983 [2024-07-26 08:39:23.240549] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:05.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (842808) - No such process 00:06:05.564 ERROR: process (pid: 842808) is no longer running 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 842792 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 842792 ']' 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 842792 
00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 842792 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 842792' 00:06:05.564 killing process with pid 842792 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 842792 00:06:05.564 08:39:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 842792 00:06:05.822 00:06:05.822 real 0m1.620s 00:06:05.822 user 0m4.348s 00:06:05.822 sys 0m0.472s 00:06:05.823 08:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.823 08:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.823 ************************************ 00:06:05.823 END TEST locking_overlapped_coremask 00:06:05.823 ************************************ 00:06:06.084 08:39:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:06.084 08:39:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.084 08:39:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.084 08:39:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.084 ************************************ 
00:06:06.084 START TEST locking_overlapped_coremask_via_rpc 00:06:06.084 ************************************ 00:06:06.084 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:06.084 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=842975 00:06:06.084 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:06.084 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 842975 /var/tmp/spdk.sock 00:06:06.084 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 842975 ']' 00:06:06.084 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.084 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.084 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.084 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.084 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.084 [2024-07-26 08:39:24.363476] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:06:06.084 [2024-07-26 08:39:24.363561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842975 ] 00:06:06.084 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.084 [2024-07-26 08:39:24.394569] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.085 [2024-07-26 08:39:24.425935] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:06.085 [2024-07-26 08:39:24.425965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.085 [2024-07-26 08:39:24.515676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.085 [2024-07-26 08:39:24.515743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.085 [2024-07-26 08:39:24.515746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.343 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.343 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:06.343 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=843100 00:06:06.343 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:06.343 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 843100 /var/tmp/spdk2.sock 00:06:06.343 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 843100 ']' 00:06:06.343 08:39:24 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.343 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.343 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.343 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.343 08:39:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.601 [2024-07-26 08:39:24.819393] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:06.601 [2024-07-26 08:39:24.819472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843100 ] 00:06:06.601 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.601 [2024-07-26 08:39:24.855074] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.601 [2024-07-26 08:39:24.909761] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:06.601 [2024-07-26 08:39:24.909786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.860 [2024-07-26 08:39:25.080697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.860 [2024-07-26 08:39:25.084093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:06.860 [2024-07-26 08:39:25.084096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.427 08:39:25 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.427 [2024-07-26 08:39:25.774165] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 842975 has claimed it. 00:06:07.427 request: 00:06:07.427 { 00:06:07.427 "method": "framework_enable_cpumask_locks", 00:06:07.427 "req_id": 1 00:06:07.427 } 00:06:07.427 Got JSON-RPC error response 00:06:07.427 response: 00:06:07.427 { 00:06:07.427 "code": -32603, 00:06:07.427 "message": "Failed to claim CPU core: 2" 00:06:07.427 } 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 842975 /var/tmp/spdk.sock 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- 
# '[' -z 842975 ']' 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.427 08:39:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.686 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.686 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.686 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 843100 /var/tmp/spdk2.sock 00:06:07.686 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 843100 ']' 00:06:07.686 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.686 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.686 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:07.686 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.686 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.946 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.946 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.946 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:07.946 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.946 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.946 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.946 00:06:07.946 real 0m1.971s 00:06:07.946 user 0m1.030s 00:06:07.946 sys 0m0.168s 00:06:07.946 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.946 08:39:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.946 ************************************ 00:06:07.946 END TEST locking_overlapped_coremask_via_rpc 00:06:07.946 ************************************ 00:06:07.946 08:39:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:07.946 08:39:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 842975 ]] 00:06:07.946 08:39:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 842975 00:06:07.946 08:39:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 842975 ']' 00:06:07.946 08:39:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 842975 00:06:07.946 08:39:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:07.946 08:39:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.946 08:39:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 842975 00:06:07.946 08:39:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.946 08:39:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.946 08:39:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 842975' 00:06:07.946 killing process with pid 842975 00:06:07.946 08:39:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 842975 00:06:07.946 08:39:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 842975 00:06:08.516 08:39:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 843100 ]] 00:06:08.516 08:39:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 843100 00:06:08.516 08:39:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 843100 ']' 00:06:08.516 08:39:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 843100 00:06:08.516 08:39:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:08.516 08:39:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.516 08:39:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 843100 00:06:08.516 08:39:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:08.517 08:39:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:08.517 08:39:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 843100' 00:06:08.517 
killing process with pid 843100 00:06:08.517 08:39:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 843100 00:06:08.517 08:39:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 843100 00:06:08.775 08:39:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.775 08:39:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:08.775 08:39:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 842975 ]] 00:06:08.775 08:39:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 842975 00:06:08.775 08:39:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 842975 ']' 00:06:08.775 08:39:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 842975 00:06:08.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (842975) - No such process 00:06:08.775 08:39:27 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 842975 is not found' 00:06:08.775 Process with pid 842975 is not found 00:06:08.775 08:39:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 843100 ]] 00:06:08.775 08:39:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 843100 00:06:08.775 08:39:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 843100 ']' 00:06:08.775 08:39:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 843100 00:06:08.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (843100) - No such process 00:06:08.775 08:39:27 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 843100 is not found' 00:06:08.775 Process with pid 843100 is not found 00:06:08.775 08:39:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.775 00:06:08.775 real 0m15.808s 00:06:08.775 user 0m27.467s 00:06:08.775 sys 0m5.396s 00:06:08.775 08:39:27 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.776 08:39:27 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:08.776 ************************************ 00:06:08.776 END TEST cpu_locks 00:06:08.776 ************************************ 00:06:08.776 00:06:08.776 real 0m39.699s 00:06:08.776 user 1m15.920s 00:06:08.776 sys 0m9.331s 00:06:08.776 08:39:27 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.776 08:39:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.776 ************************************ 00:06:08.776 END TEST event 00:06:08.776 ************************************ 00:06:08.776 08:39:27 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:08.776 08:39:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.776 08:39:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.776 08:39:27 -- common/autotest_common.sh@10 -- # set +x 00:06:09.036 ************************************ 00:06:09.036 START TEST thread 00:06:09.036 ************************************ 00:06:09.036 08:39:27 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.036 * Looking for test storage... 
00:06:09.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:09.036 08:39:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.036 08:39:27 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:09.036 08:39:27 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.036 08:39:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.036 ************************************ 00:06:09.036 START TEST thread_poller_perf 00:06:09.036 ************************************ 00:06:09.036 08:39:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.036 [2024-07-26 08:39:27.317849] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:09.036 [2024-07-26 08:39:27.317909] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843473 ] 00:06:09.036 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.036 [2024-07-26 08:39:27.350875] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.036 [2024-07-26 08:39:27.378241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.036 [2024-07-26 08:39:27.468649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.036 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:10.415 ====================================== 00:06:10.415 busy:2712361227 (cyc) 00:06:10.415 total_run_count: 294000 00:06:10.415 tsc_hz: 2700000000 (cyc) 00:06:10.415 ====================================== 00:06:10.415 poller_cost: 9225 (cyc), 3416 (nsec) 00:06:10.415 00:06:10.415 real 0m1.254s 00:06:10.415 user 0m1.169s 00:06:10.415 sys 0m0.077s 00:06:10.415 08:39:28 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.415 08:39:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.415 ************************************ 00:06:10.415 END TEST thread_poller_perf 00:06:10.415 ************************************ 00:06:10.415 08:39:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.415 08:39:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:10.415 08:39:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.415 08:39:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.415 ************************************ 00:06:10.415 START TEST thread_poller_perf 00:06:10.415 ************************************ 00:06:10.415 08:39:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.415 [2024-07-26 08:39:28.625203] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:06:10.415 [2024-07-26 08:39:28.625267] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843626 ] 00:06:10.415 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.415 [2024-07-26 08:39:28.656418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.415 [2024-07-26 08:39:28.688281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.415 [2024-07-26 08:39:28.778171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.415 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:11.797 ====================================== 00:06:11.797 busy:2702720188 (cyc) 00:06:11.797 total_run_count: 3862000 00:06:11.797 tsc_hz: 2700000000 (cyc) 00:06:11.797 ====================================== 00:06:11.797 poller_cost: 699 (cyc), 258 (nsec) 00:06:11.797 00:06:11.797 real 0m1.252s 00:06:11.797 user 0m1.160s 00:06:11.797 sys 0m0.087s 00:06:11.797 08:39:29 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.797 08:39:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.797 ************************************ 00:06:11.797 END TEST thread_poller_perf 00:06:11.797 ************************************ 00:06:11.797 08:39:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:11.797 00:06:11.797 real 0m2.644s 00:06:11.797 user 0m2.392s 00:06:11.797 sys 0m0.249s 00:06:11.797 08:39:29 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.797 08:39:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.797 ************************************ 00:06:11.797 END TEST thread 00:06:11.797 ************************************ 00:06:11.797 08:39:29 
-- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:11.797 08:39:29 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:11.797 08:39:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.797 08:39:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.797 08:39:29 -- common/autotest_common.sh@10 -- # set +x 00:06:11.797 ************************************ 00:06:11.797 START TEST app_cmdline 00:06:11.797 ************************************ 00:06:11.797 08:39:29 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:11.797 * Looking for test storage... 00:06:11.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:11.797 08:39:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:11.797 08:39:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=843823 00:06:11.797 08:39:29 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:11.797 08:39:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 843823 00:06:11.797 08:39:29 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 843823 ']' 00:06:11.797 08:39:29 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.797 08:39:29 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.797 08:39:29 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.797 08:39:29 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.797 08:39:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.797 [2024-07-26 08:39:30.046633] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:11.797 [2024-07-26 08:39:30.046740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843823 ] 00:06:11.797 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.797 [2024-07-26 08:39:30.079909] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.797 [2024-07-26 08:39:30.106151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.797 [2024-07-26 08:39:30.192025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.057 08:39:30 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.057 08:39:30 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:12.057 08:39:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:12.315 { 00:06:12.316 "version": "SPDK v24.09-pre git sha1 704257090", 00:06:12.316 "fields": { 00:06:12.316 "major": 24, 00:06:12.316 "minor": 9, 00:06:12.316 "patch": 0, 00:06:12.316 "suffix": "-pre", 00:06:12.316 "commit": "704257090" 00:06:12.316 } 00:06:12.316 } 00:06:12.316 08:39:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:12.316 08:39:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:12.316 08:39:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:12.316 08:39:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd 
rpc_get_methods | jq -r ".[]" | sort)) 00:06:12.316 08:39:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.316 08:39:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.316 08:39:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.316 08:39:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:12.316 08:39:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:12.316 08:39:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:12.316 08:39:30 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:12.575 request: 00:06:12.575 { 00:06:12.575 "method": "env_dpdk_get_mem_stats", 00:06:12.575 "req_id": 1 00:06:12.575 } 00:06:12.575 Got JSON-RPC error response 00:06:12.575 response: 00:06:12.575 { 00:06:12.575 "code": -32601, 00:06:12.575 "message": "Method not found" 00:06:12.575 } 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.575 08:39:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 843823 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 843823 ']' 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 843823 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 843823 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 843823' 00:06:12.575 killing process with pid 843823 00:06:12.575 08:39:30 app_cmdline -- common/autotest_common.sh@969 -- # kill 843823 00:06:12.575 08:39:30 app_cmdline 
-- common/autotest_common.sh@974 -- # wait 843823 00:06:13.142 00:06:13.142 real 0m1.464s 00:06:13.142 user 0m1.787s 00:06:13.142 sys 0m0.460s 00:06:13.142 08:39:31 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.142 08:39:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.142 ************************************ 00:06:13.142 END TEST app_cmdline 00:06:13.142 ************************************ 00:06:13.142 08:39:31 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:13.142 08:39:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.142 08:39:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.142 08:39:31 -- common/autotest_common.sh@10 -- # set +x 00:06:13.142 ************************************ 00:06:13.142 START TEST version 00:06:13.142 ************************************ 00:06:13.142 08:39:31 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:13.142 * Looking for test storage... 
00:06:13.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:13.142 08:39:31 version -- app/version.sh@17 -- # get_header_version major 00:06:13.142 08:39:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:13.142 08:39:31 version -- app/version.sh@14 -- # cut -f2 00:06:13.142 08:39:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.142 08:39:31 version -- app/version.sh@17 -- # major=24 00:06:13.142 08:39:31 version -- app/version.sh@18 -- # get_header_version minor 00:06:13.142 08:39:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:13.142 08:39:31 version -- app/version.sh@14 -- # cut -f2 00:06:13.142 08:39:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.142 08:39:31 version -- app/version.sh@18 -- # minor=9 00:06:13.142 08:39:31 version -- app/version.sh@19 -- # get_header_version patch 00:06:13.143 08:39:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:13.143 08:39:31 version -- app/version.sh@14 -- # cut -f2 00:06:13.143 08:39:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.143 08:39:31 version -- app/version.sh@19 -- # patch=0 00:06:13.143 08:39:31 version -- app/version.sh@20 -- # get_header_version suffix 00:06:13.143 08:39:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:13.143 08:39:31 version -- app/version.sh@14 -- # cut -f2 00:06:13.143 08:39:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:13.143 08:39:31 version -- app/version.sh@20 -- # suffix=-pre 00:06:13.143 08:39:31 version -- app/version.sh@22 -- # version=24.9 
00:06:13.143 08:39:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:13.143 08:39:31 version -- app/version.sh@28 -- # version=24.9rc0 00:06:13.143 08:39:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:13.143 08:39:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:13.143 08:39:31 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:13.143 08:39:31 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:13.143 00:06:13.143 real 0m0.109s 00:06:13.143 user 0m0.055s 00:06:13.143 sys 0m0.075s 00:06:13.143 08:39:31 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.143 08:39:31 version -- common/autotest_common.sh@10 -- # set +x 00:06:13.143 ************************************ 00:06:13.143 END TEST version 00:06:13.143 ************************************ 00:06:13.143 08:39:31 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:13.143 08:39:31 -- spdk/autotest.sh@202 -- # uname -s 00:06:13.143 08:39:31 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:13.143 08:39:31 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:13.143 08:39:31 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:13.143 08:39:31 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:06:13.143 08:39:31 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:13.143 08:39:31 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:13.143 08:39:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.143 08:39:31 -- common/autotest_common.sh@10 -- # set +x 00:06:13.143 08:39:31 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:13.143 08:39:31 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:13.143 08:39:31 -- 
spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:13.143 08:39:31 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:13.143 08:39:31 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:13.143 08:39:31 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:13.143 08:39:31 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:13.143 08:39:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:13.143 08:39:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.143 08:39:31 -- common/autotest_common.sh@10 -- # set +x 00:06:13.402 ************************************ 00:06:13.402 START TEST nvmf_tcp 00:06:13.402 ************************************ 00:06:13.402 08:39:31 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:13.402 * Looking for test storage... 00:06:13.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:13.402 08:39:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:13.402 08:39:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:13.402 08:39:31 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:13.402 08:39:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:13.402 08:39:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.402 08:39:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.402 ************************************ 00:06:13.402 START TEST nvmf_target_core 00:06:13.402 ************************************ 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:13.402 * Looking for test storage... 
00:06:13.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.402 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.403 08:39:31 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:13.403 ************************************ 00:06:13.403 START TEST nvmf_abort 00:06:13.403 ************************************ 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:13.403 * Looking for test storage... 
00:06:13.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:13.403 08:39:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:13.403 08:39:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.940 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.940 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:15.940 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:15.940 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:15.940 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:15.940 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:15.940 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:15.941 08:39:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:15.941 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:15.941 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:15.941 08:39:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:15.941 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.941 08:39:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:15.941 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.941 08:39:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.941 08:39:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:15.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:15.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:06:15.941 00:06:15.941 --- 10.0.0.2 ping statistics --- 00:06:15.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.941 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:15.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:15.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:06:15.941 00:06:15.941 --- 10.0.0.1 ping statistics --- 00:06:15.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.941 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:15.941 08:39:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.941 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=845867 00:06:15.942 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:15.942 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 845867 00:06:15.942 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 845867 ']' 00:06:15.942 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.942 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.942 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.942 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.942 08:39:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.942 [2024-07-26 08:39:34.034329] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:06:15.942 [2024-07-26 08:39:34.034417] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.942 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.942 [2024-07-26 08:39:34.072311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.942 [2024-07-26 08:39:34.100558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.942 [2024-07-26 08:39:34.186055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.942 [2024-07-26 08:39:34.186115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.942 [2024-07-26 08:39:34.186132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.942 [2024-07-26 08:39:34.186146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.942 [2024-07-26 08:39:34.186157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:15.942 [2024-07-26 08:39:34.186263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.942 [2024-07-26 08:39:34.186353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.942 [2024-07-26 08:39:34.186356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.942 [2024-07-26 08:39:34.329984] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.942 Malloc0 00:06:15.942 08:39:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.942 Delay0 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.942 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:16.202 [2024-07-26 08:39:34.401046] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.202 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.202 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:16.202 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.202 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:16.202 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.202 08:39:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:16.202 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.202 [2024-07-26 08:39:34.547198] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:18.744 Initializing NVMe Controllers 00:06:18.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:18.744 controller IO queue size 128 less than required 00:06:18.744 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:18.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:18.744 Initialization complete. Launching workers. 
00:06:18.744 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32690 00:06:18.744 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32751, failed to submit 62 00:06:18.744 success 32694, unsuccess 57, failed 0 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:18.744 rmmod nvme_tcp 00:06:18.744 rmmod nvme_fabrics 00:06:18.744 rmmod nvme_keyring 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:18.744 08:39:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 845867 ']' 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 845867 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 845867 ']' 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 845867 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 845867 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 845867' 00:06:18.744 killing process with pid 845867 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 845867 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 845867 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:18.744 08:39:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.744 08:39:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.653 08:39:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:20.653 00:06:20.653 real 0m7.199s 00:06:20.653 user 0m10.474s 00:06:20.653 sys 0m2.468s 00:06:20.653 08:39:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.653 08:39:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.653 ************************************ 00:06:20.653 END TEST nvmf_abort 00:06:20.653 ************************************ 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.653 ************************************ 00:06:20.653 START TEST nvmf_ns_hotplug_stress 00:06:20.653 ************************************ 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:20.653 * Looking for test storage... 
00:06:20.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.653 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:20.654 08:39:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:20.654 08:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:23.218 08:39:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:23.218 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:23.218 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.218 08:39:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:23.218 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:23.218 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:23.219 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:23.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:23.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:06:23.219 00:06:23.219 --- 10.0.0.2 ping statistics --- 00:06:23.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.219 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:23.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:06:23.219 00:06:23.219 --- 10.0.0.1 ping statistics --- 00:06:23.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.219 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=848095 00:06:23.219 08:39:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 848095 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 848095 ']' 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.219 [2024-07-26 08:39:41.302943] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:23.219 [2024-07-26 08:39:41.303042] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.219 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.219 [2024-07-26 08:39:41.342317] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
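The `nvmf_tcp_init` sequence traced above can be sketched as a dry-run script. This is a sketch, not the test's own code: the interface names (`cvl_0_0`, `cvl_0_1`), namespace name, and IPs mirror the trace, but the function name `netns_setup_cmds` is my own, and the commands are printed rather than executed so no root privileges or ice NICs are needed.

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init from nvmf/common.sh as seen in the
# trace: one physical port moves into a namespace to act as the NVMe-oF
# target side, the other stays in the root namespace as the initiator.
# Commands are echoed, not run.

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

netns_setup_cmds() {
    # Clear any stale addressing on both ports
    echo "ip -4 addr flush $TARGET_IF"
    echo "ip -4 addr flush $INITIATOR_IF"
    # Create the namespace and move the target port into it
    echo "ip netns add $NS"
    echo "ip link set $TARGET_IF netns $NS"
    # Initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace
    echo "ip addr add 10.0.0.1/24 dev $INITIATOR_IF"
    echo "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TARGET_IF"
    # Bring links up on both sides (plus loopback inside the namespace)
    echo "ip link set $INITIATOR_IF up"
    echo "ip netns exec $NS ip link set $TARGET_IF up"
    echo "ip netns exec $NS ip link set lo up"
    # Open the NVMe/TCP port and verify reachability in both directions
    echo "iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"
    echo "ping -c 1 10.0.0.2"
    echo "ip netns exec $NS ping -c 1 10.0.0.1"
}

netns_setup_cmds
```

After this setup, `nvmf_tgt` is launched with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix in the trace), so the target only sees the namespaced port while the perf initiator connects from the root namespace.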
00:06:23.219 [2024-07-26 08:39:41.368447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.219 [2024-07-26 08:39:41.457161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.219 [2024-07-26 08:39:41.457222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.219 [2024-07-26 08:39:41.457251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.219 [2024-07-26 08:39:41.457263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.219 [2024-07-26 08:39:41.457273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:23.219 [2024-07-26 08:39:41.457430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.219 [2024-07-26 08:39:41.457493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.219 [2024-07-26 08:39:41.457495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # 
null_size=1000 00:06:23.219 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:23.477 [2024-07-26 08:39:41.822424] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.477 08:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:23.736 08:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:23.994 [2024-07-26 08:39:42.322790] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.994 08:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:24.251 08:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:24.510 Malloc0 00:06:24.510 08:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:24.767 Delay0 00:06:24.767 08:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.023 08:39:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:25.280 NULL1 00:06:25.280 08:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:25.536 08:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=848494 00:06:25.536 08:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:25.536 08:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:25.536 08:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.536 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.794 08:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.051 08:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:26.051 08:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:26.309 true 00:06:26.309 08:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:26.309 08:39:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.566 08:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.826 08:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:26.826 08:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:27.083 true 00:06:27.083 08:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:27.084 08:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.651 Read completed with error (sct=0, sc=11) 00:06:27.909 08:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.909 08:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:27.909 08:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:28.167 true 00:06:28.167 08:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:28.167 08:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.424 08:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.682 08:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:28.682 08:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:28.939 true 00:06:28.939 08:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:28.939 08:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.878 08:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.135 08:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:30.135 08:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:30.393 true 00:06:30.393 08:39:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:30.393 08:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.683 08:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.941 08:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:30.941 08:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:31.199 true 00:06:31.199 08:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:31.199 08:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.138 08:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.395 08:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:32.395 08:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:32.654 true 00:06:32.654 08:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:32.654 08:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.912 08:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.169 08:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:33.170 08:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:33.427 true 00:06:33.427 08:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:33.427 08:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.364 08:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.364 08:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:34.364 08:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:34.622 true 00:06:34.622 08:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 
00:06:34.622 08:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.880 08:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.163 08:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:35.163 08:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:35.419 true 00:06:35.419 08:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:35.419 08:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.354 08:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.611 08:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:36.611 08:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:36.868 true 00:06:36.869 08:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:36.869 08:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.126 08:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.384 08:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:37.384 08:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:37.642 true 00:06:37.642 08:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:37.642 08:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.578 08:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.837 08:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:38.837 08:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:39.096 true 00:06:39.096 08:39:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:39.096 08:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.355 08:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.355 08:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:39.355 08:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:39.613 true 00:06:39.613 08:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:39.613 08:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.551 08:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.809 08:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:40.809 08:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:41.067 true 00:06:41.067 08:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:41.067 08:39:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.324 08:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.583 08:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:41.583 08:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:41.841 true 00:06:41.841 08:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:41.841 08:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.777 08:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.035 08:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:43.035 08:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:43.292 true 00:06:43.292 08:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:43.292 08:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.550 08:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.808 08:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:43.808 08:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:44.066 true 00:06:44.066 08:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:44.066 08:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.324 08:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.582 08:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:44.582 08:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:44.839 true 00:06:44.840 08:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:44.840 08:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.774 
08:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.032 08:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:46.032 08:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:46.290 true 00:06:46.290 08:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:46.290 08:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.548 08:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.806 08:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:46.806 08:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:47.064 true 00:06:47.064 08:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:47.064 08:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.001 08:40:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.001 08:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:48.001 08:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:48.259 true 00:06:48.259 08:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:48.259 08:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.517 08:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.803 08:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:48.803 08:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:49.061 true 00:06:49.061 08:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:49.061 08:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.999 08:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.256 08:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:50.256 08:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:50.513 true 00:06:50.513 08:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:50.513 08:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.771 08:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.028 08:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:51.028 08:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:51.286 true 00:06:51.286 08:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:51.286 08:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.222 
08:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.479 08:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:52.479 08:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:52.736 true 00:06:52.736 08:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:52.736 08:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.994 08:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.251 08:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:53.251 08:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:53.509 true 00:06:53.509 08:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:53.509 08:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:54.444 08:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.445 08:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:54.445 08:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:54.702 true 00:06:54.702 08:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:54.702 08:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.959 08:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.217 08:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:55.217 08:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:55.474 true 00:06:55.474 08:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 848494 00:06:55.474 08:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.413 
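The trace above is the single-namespace phase of ns_hotplug_stress.sh (lines @44-@50): each pass checks that the nvmf target process (PID 848494 here) is still alive with `kill -0`, detaches namespace 1, re-attaches the Delay0 bdev, and grows the NULL1 null bdev by one unit. A minimal runnable sketch of that loop, with `rpc` as a stand-in stub for scripts/rpc.py and an illustrative PID and starting size (both are assumptions, not the actual harness):

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress add/remove/resize loop (@44-@50).
# rpc() is a stub; the real test invokes scripts/rpc.py against the SPDK target.
rpc() { echo "rpc: $*"; }

pid=$$            # stand-in for the nvmf target PID (848494 in the log)
null_size=1000    # illustrative starting size of the NULL1 null bdev

for i in $(seq 1 5); do
    kill -0 "$pid" || break                       # bail out if the target died
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))                  # grow the bdev each pass
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"
```

The `kill -0` probe sends no signal; it only tests whether the PID still exists, which is how the script later notices the target has gone away.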
Initializing NVMe Controllers
00:06:56.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:56.413 Controller IO queue size 128, less than required.
00:06:56.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:56.413 Controller IO queue size 128, less than required.
00:06:56.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:56.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:56.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:56.413 Initialization complete. Launching workers.
00:06:56.413 ========================================================
00:06:56.413 Latency(us)
00:06:56.413 Device Information : IOPS MiB/s Average min max
00:06:56.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 640.53 0.31 102864.80 2179.62 1051841.01
00:06:56.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8699.49 4.25 14670.21 3353.34 449761.35
00:06:56.413 ========================================================
00:06:56.413 Total : 9340.02 4.56 20718.51 2179.62 1051841.01
00:06:56.413
00:06:56.413 08:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:56.671 08:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:06:56.671 08:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:06:56.928 true
00:06:56.928 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill
-0 848494 00:06:56.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (848494) - No such process 00:06:56.928 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 848494 00:06:56.928 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.186 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.444 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:57.444 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:57.444 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:57.444 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.444 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:57.702 null0 00:06:57.702 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.702 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.702 08:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:57.960 null1 00:06:57.960 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.960 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.960 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:58.218 null2 00:06:58.218 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.218 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.218 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:58.218 null3 00:06:58.218 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.218 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.218 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:58.476 null4 00:06:58.476 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.476 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.476 08:40:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:58.734 null5 00:06:58.734 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.734 08:40:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.734 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:58.992 null6 00:06:58.992 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:58.992 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:58.992 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:59.250 null7 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
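The setup phase traced above creates one null bdev per worker, null0 through null7, each 100 MiB with a 4096-byte block size, via `bdev_null_create`. A sketch of that loop (ns_hotplug_stress.sh @58-@60), again with `rpc` stubbed in place of scripts/rpc.py:

```shell
#!/usr/bin/env bash
# Sketch of the null-bdev setup loop: one bdev per worker thread.
rpc() { echo "rpc: $*"; }   # stub for scripts/rpc.py

nthreads=8
created=()
for (( i = 0; i < nthreads; i++ )); do
    # 100 MiB null bdev with 4096-byte blocks, named null0..null7
    rpc bdev_null_create "null$i" 100 4096
    created+=("null$i")
done
echo "created ${#created[@]} bdevs"
```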
00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.250 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 852589 852590 852591 852594 852596 852598 852600 852602 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.251 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.509 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.509 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.510 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.510 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.510 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.510 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.510 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.510 08:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
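The interleaved @16/@17/@18 traces above come from eight concurrent add_remove workers, one per null bdev: each worker is launched in the background, its PID is collected with `pids+=($!)`, and the script then waits on all of them (the `wait 852589 852590 ... 852602` seen earlier). A runnable sketch of that fan-out, with the RPC calls stubbed out and an iteration count matching the `i < 10` loop in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the parallel hotplug phase (ns_hotplug_stress.sh @58-@66).
rpc() { :; }    # no-op stub for scripts/rpc.py

add_remove() {  # repeatedly attach and detach one namespace (@14-@19)
    local nsid=$1 bdev=$2
    for (( i = 0; i < 10; i++ )); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for (( i = 0; i < nthreads; i++ )); do
    add_remove $((i + 1)) "null$i" &   # nsid is 1-based, bdev name 0-based
    pids+=($!)
done
wait "${pids[@]}"                      # block until every worker finishes
echo "workers done: ${#pids[@]}"
```

Running the workers in parallel is what produces the interleaved remove/add ordering visible in the log.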
00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.768 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.026 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.027 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.027 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.027 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.286 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:07:00.286 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.286 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.286 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.286 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.286 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.286 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.545 08:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.804 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.804 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.804 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.804 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.804 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.804 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.804 08:40:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.804 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.062 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.062 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.062 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.062 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.062 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.062 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.062 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.062 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.062 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.063 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.321 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.321 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.321 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.321 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.321 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.321 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.321 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.321 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.579 08:40:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.839 08:40:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.839 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.839 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.839 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.839 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.839 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.839 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.839 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.133 08:40:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.133 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:02.392 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.392 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.392 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.392 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:02.392 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.392 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.392 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.392 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:02.650 08:40:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.650 08:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:02.909 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.909 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.909 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:02.909 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.909 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.909 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.909 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.909 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.168 
08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.168 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:03.426 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:03.426 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:03.426 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:03.426 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.427 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:03.427 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:03.427 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:03.427 08:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.685 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:03.944 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:03.944 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:03.944 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:03.944 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:03.944 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:03.944 
08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:03.944 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:03.944 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.202 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.203 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:04.461 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:04.461 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.461 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.461 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.461 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:04.461 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:04.461 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:04.461 08:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:04.720 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:04.720 rmmod nvme_tcp 00:07:04.720 rmmod nvme_fabrics 00:07:04.979 rmmod nvme_keyring 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 848095 ']' 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 848095 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' 
-z 848095 ']' 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 848095 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 848095 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 848095' 00:07:04.979 killing process with pid 848095 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 848095 00:07:04.979 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 848095 00:07:05.239 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:05.239 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:05.239 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:05.239 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:05.239 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:05.239 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.239 08:40:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.239 08:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.146 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:07.147 00:07:07.147 real 0m46.477s 00:07:07.147 user 3m27.693s 00:07:07.147 sys 0m18.154s 00:07:07.147 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.147 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:07.147 ************************************ 00:07:07.147 END TEST nvmf_ns_hotplug_stress 00:07:07.147 ************************************ 00:07:07.147 08:40:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:07.147 08:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.147 08:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.147 08:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.147 ************************************ 00:07:07.147 START TEST nvmf_delete_subsystem 00:07:07.147 ************************************ 00:07:07.147 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:07.147 * Looking for test storage... 
00:07:07.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:07.407 08:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:09.312 08:40:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:09.312 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.313 08:40:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:09.313 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:09.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.313 08:40:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:09.313 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:09.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.313 
08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:09.313 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:09.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:07:09.313 00:07:09.313 --- 10.0.0.2 ping statistics --- 00:07:09.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.314 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:09.314 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:09.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:07:09.314 00:07:09.314 --- 10.0.0.1 ping statistics --- 00:07:09.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.314 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:09.314 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.314 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:09.314 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.314 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.314 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.314 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.314 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.314 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.314 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=855351 00:07:09.574 08:40:27 
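The trace above (nvmf/common.sh `nvmf_tcp_init`) builds a two-port TCP loopback topology by moving one NIC port into a network namespace so the target and initiator can talk over real interfaces on one host. A minimal dry-run sketch of that sequence, with interface names and addresses taken from the log; the `run` wrapper that prints instead of executing is my addition, since the real commands need root and physical NICs:

```shell
#!/usr/bin/env sh
# Dry-run sketch of the namespace setup traced above.
# Interface names and IPs come from the log; echoing instead of executing
# is an assumption here, because the real commands require root.
run() { echo "+ $*"; }             # swap for: sudo "$@" on a real test node

TARGET_IF=cvl_0_0                   # port moved into the namespace (target side)
INITIATOR_IF=cvl_0_1                # port left in the default namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port (4420) toward the initiator-side interface
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

The bidirectional pings in the log (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) then verify the topology before `nvmf_tgt` is launched under `ip netns exec`.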
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 855351 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 855351 ']' 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.574 08:40:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.574 [2024-07-26 08:40:27.833824] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:09.574 [2024-07-26 08:40:27.833906] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.574 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.574 [2024-07-26 08:40:27.870683] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:09.574 [2024-07-26 08:40:27.902565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.574 [2024-07-26 08:40:27.991713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.574 [2024-07-26 08:40:27.991774] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.574 [2024-07-26 08:40:27.991790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.574 [2024-07-26 08:40:27.991803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.574 [2024-07-26 08:40:27.991814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.574 [2024-07-26 08:40:27.991895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.574 [2024-07-26 08:40:27.991901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.833 [2024-07-26 08:40:28.142499] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.833 [2024-07-26 08:40:28.158730] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.833 NULL1 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.833 Delay0 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=855375 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:09.833 08:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:09.833 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.833 [2024-07-26 08:40:28.233494] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing 
connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:11.740 08:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.740 08:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.740 08:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.999 Read completed with error (sct=0, sc=8) 00:07:11.999 Write completed with error (sct=0, sc=8) 00:07:11.999 Read completed with error (sct=0, sc=8) 00:07:11.999 Read completed with error (sct=0, sc=8) 00:07:11.999 starting I/O failed: -6
[... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided ...]
00:07:11.999 [2024-07-26 08:40:30.417475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f09f400d330 is same with the state(5) to be set
[... repeated completion/failure lines elided ...]
00:07:12.937 [2024-07-26 08:40:31.371191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125bb40 is same with the state(5) to be set
[... repeated completion/failure lines elided ...]
00:07:13.198 [2024-07-26 08:40:31.415176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123dd40 is same with the state(5) to be set
[... repeated completion/failure lines elided ...]
00:07:13.198 [2024-07-26 08:40:31.415444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e100 is same with the state(5) to be set
[... repeated completion/failure lines elided ...]
Write completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 [2024-07-26 08:40:31.417485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f09f400d660 is same with the state(5) to be set 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 Read completed with error (sct=0, sc=8) 00:07:13.198 Write completed with error (sct=0, sc=8) 00:07:13.198 [2024-07-26 08:40:31.420473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f09f400d000 is same with the state(5) to be set 00:07:13.198 Initializing NVMe Controllers 00:07:13.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:13.198 Controller 
IO queue size 128, less than required. 00:07:13.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:13.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:13.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:13.198 Initialization complete. Launching workers. 00:07:13.198 ======================================================== 00:07:13.198 Latency(us) 00:07:13.198 Device Information : IOPS MiB/s Average min max 00:07:13.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.51 0.09 911829.83 743.54 1013622.81 00:07:13.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.69 0.08 919330.00 805.97 1011336.17 00:07:13.198 ======================================================== 00:07:13.198 Total : 341.20 0.17 915340.09 743.54 1013622.81 00:07:13.198 00:07:13.198 [2024-07-26 08:40:31.421129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125bb40 (9): Bad file descriptor 00:07:13.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:13.198 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.198 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:13.198 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 855375 00:07:13.198 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 855375 00:07:13.768 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (855375) - No such process
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 855375
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 855375
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 855375
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:13.768 [2024-07-26 08:40:31.945250] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=855899
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855899
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:13.768 08:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:13.768 EAL: No free 2048 kB hugepages reported on node 1
00:07:13.768 [2024-07-26 08:40:32.008108] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:14.028 08:40:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:14.028 08:40:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855899
00:07:14.028 08:40:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:14.596 08:40:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:14.596 08:40:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855899
00:07:14.596 08:40:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:15.163 08:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:15.163 08:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855899
00:07:15.163 08:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:15.729 08:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:15.729 08:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855899
00:07:15.729 08:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:16.298 08:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:16.298 08:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855899
00:07:16.298 08:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:16.557 08:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:16.557 08:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855899
00:07:16.557 08:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:16.836 Initializing NVMe Controllers
00:07:16.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:16.836 Controller IO queue size 128, less than required.
00:07:16.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:16.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:16.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:16.836 Initialization complete. Launching workers.
00:07:16.836 ========================================================
00:07:16.836                                                                            Latency(us)
00:07:16.836 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:16.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1004509.50 1000209.12 1011257.99
00:07:16.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1005757.71 1000229.22 1042164.24
00:07:16.836 ========================================================
00:07:16.836 Total                                                                    :     256.00       0.12 1005133.61 1000209.12 1042164.24
00:07:16.837
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 855899
00:07:17.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (855899) - No such process
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 855899
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:17.102 rmmod nvme_tcp
00:07:17.102 rmmod nvme_fabrics
00:07:17.102 rmmod nvme_keyring
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 855351 ']'
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 855351
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 855351 ']'
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 855351
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:17.102 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 855351
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 855351'
00:07:17.360 killing process with pid 855351
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 855351
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 855351
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:17.360 08:40:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:19.897
00:07:19.897 real	0m12.278s
00:07:19.897 user	0m27.771s
00:07:19.897 sys	0m2.978s
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.897 ************************************
00:07:19.897 END TEST nvmf_delete_subsystem
00:07:19.897 ************************************
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:19.897 ************************************
00:07:19.897 START TEST nvmf_host_management
00:07:19.897 ************************************
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:19.897 * Looking for test storage...
00:07:19.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:19.897 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVMF_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable
00:07:19.898 08:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=()
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=()
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=()
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=()
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=()
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=()
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=()
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:07:21.805 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:07:21.805 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:21.805 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:21.806 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:07:21.806 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:21.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:07:21.806 00:07:21.806 --- 10.0.0.2 ping statistics --- 00:07:21.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.806 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:21.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:07:21.806 00:07:21.806 --- 10.0.0.1 ping statistics --- 00:07:21.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.806 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:21.806 08:40:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=858245 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 858245 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 858245 ']' 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.806 08:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.806 [2024-07-26 08:40:40.030729] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:07:21.806 [2024-07-26 08:40:40.030813] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.806 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.806 [2024-07-26 08:40:40.068804] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:21.806 [2024-07-26 08:40:40.100782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.806 [2024-07-26 08:40:40.192583] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.806 [2024-07-26 08:40:40.192649] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.806 [2024-07-26 08:40:40.192665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.806 [2024-07-26 08:40:40.192679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.806 [2024-07-26 08:40:40.192690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:21.806 [2024-07-26 08:40:40.192772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.806 [2024-07-26 08:40:40.192815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.806 [2024-07-26 08:40:40.192892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.806 [2024-07-26 08:40:40.192894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.065 [2024-07-26 08:40:40.347405] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:22.065 08:40:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.065 Malloc0 00:07:22.065 [2024-07-26 08:40:40.412458] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=858292 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 858292 /var/tmp/bdevperf.sock 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 858292 ']' 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:22.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:22.065 { 00:07:22.065 "params": { 00:07:22.065 "name": "Nvme$subsystem", 00:07:22.065 "trtype": "$TEST_TRANSPORT", 00:07:22.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:22.065 "adrfam": "ipv4", 00:07:22.065 "trsvcid": "$NVMF_PORT", 00:07:22.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:22.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:22.065 "hdgst": ${hdgst:-false}, 
00:07:22.065 "ddgst": ${ddgst:-false} 00:07:22.065 }, 00:07:22.065 "method": "bdev_nvme_attach_controller" 00:07:22.065 } 00:07:22.065 EOF 00:07:22.065 )") 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:22.065 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:22.065 "params": { 00:07:22.065 "name": "Nvme0", 00:07:22.065 "trtype": "tcp", 00:07:22.065 "traddr": "10.0.0.2", 00:07:22.065 "adrfam": "ipv4", 00:07:22.065 "trsvcid": "4420", 00:07:22.065 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:22.065 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:22.065 "hdgst": false, 00:07:22.065 "ddgst": false 00:07:22.065 }, 00:07:22.065 "method": "bdev_nvme_attach_controller" 00:07:22.065 }' 00:07:22.065 [2024-07-26 08:40:40.491579] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:22.065 [2024-07-26 08:40:40.491649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858292 ] 00:07:22.065 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.065 [2024-07-26 08:40:40.523667] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:22.323 [2024-07-26 08:40:40.552934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.323 [2024-07-26 08:40:40.639877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.583 Running I/O for 10 seconds... 
00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:22.583 08:40:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:22.844 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:22.845 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:22.845 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:22.845 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.845 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.845 [2024-07-26 08:40:41.235141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797ae0 is same with the state(5) to be set 00:07:22.845 [2024-07-26 08:40:41.235238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797ae0 is same with the state(5) to be set 00:07:22.845 [2024-07-26 08:40:41.235254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797ae0 is same with the state(5) to be set 00:07:22.845 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.845 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:22.845 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.845 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.845 08:40:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.845 08:40:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:22.845 [2024-07-26 08:40:41.252560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.845 [2024-07-26 08:40:41.252604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.252623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.845 [2024-07-26 08:40:41.252637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.252652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.845 [2024-07-26 08:40:41.252666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.252680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.845 [2024-07-26 08:40:41.252703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.252717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x812b50 is same with the state(5) to be set 00:07:22.845 [2024-07-26 08:40:41.252834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.252856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:22.845 [2024-07-26 08:40:41.252881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.252897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.252914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.252943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.252959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.252972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.252987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 
[2024-07-26 08:40:41.253604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.845 [2024-07-26 08:40:41.253694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.845 [2024-07-26 08:40:41.253709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.253722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.253737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.253750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.253765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.253778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.253792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.253806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.253821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.253834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.253849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.253863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.253878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.253891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.253906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.253919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.253934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.253947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.253963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.253980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.253995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 
[2024-07-26 08:40:41.254282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:22.846 [2024-07-26 08:40:41.254799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.846 [2024-07-26 08:40:41.254812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.847 [2024-07-26 08:40:41.254902] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc445f0 was disconnected and freed. reset controller. 00:07:22.847 [2024-07-26 08:40:41.256028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:22.847 task offset: 73728 on job bdev=Nvme0n1 fails 00:07:22.847 00:07:22.847 Latency(us) 00:07:22.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.847 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:22.847 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:22.847 Verification LBA range: start 0x0 length 0x400 00:07:22.847 Nvme0n1 : 0.40 1446.82 90.43 160.76 0.00 38681.37 2718.53 34175.81 00:07:22.847 =================================================================================================================== 00:07:22.847 Total : 1446.82 90.43 160.76 0.00 38681.37 2718.53 34175.81 00:07:22.847 [2024-07-26 08:40:41.257894] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.847 [2024-07-26 08:40:41.257921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x812b50 (9): Bad file descriptor 00:07:22.847 [2024-07-26 08:40:41.268679] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 858292 00:07:24.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (858292) - No such process 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:24.227 { 00:07:24.227 "params": { 00:07:24.227 "name": "Nvme$subsystem", 00:07:24.227 "trtype": "$TEST_TRANSPORT", 00:07:24.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.227 "adrfam": "ipv4", 00:07:24.227 "trsvcid": "$NVMF_PORT", 00:07:24.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.227 "hdgst": ${hdgst:-false}, 00:07:24.227 "ddgst": ${ddgst:-false} 00:07:24.227 }, 00:07:24.227 "method": "bdev_nvme_attach_controller" 00:07:24.227 } 00:07:24.227 EOF 00:07:24.227 )") 00:07:24.227 08:40:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:24.227 08:40:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:24.227 "params": { 00:07:24.227 "name": "Nvme0", 00:07:24.227 "trtype": "tcp", 00:07:24.227 "traddr": "10.0.0.2", 00:07:24.227 "adrfam": "ipv4", 00:07:24.227 "trsvcid": "4420", 00:07:24.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.227 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:24.227 "hdgst": false, 00:07:24.227 "ddgst": false 00:07:24.227 }, 00:07:24.227 "method": "bdev_nvme_attach_controller" 00:07:24.227 }' 00:07:24.227 [2024-07-26 08:40:42.297421] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:24.227 [2024-07-26 08:40:42.297494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858569 ] 00:07:24.227 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.227 [2024-07-26 08:40:42.329571] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:24.227 [2024-07-26 08:40:42.358418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.227 [2024-07-26 08:40:42.447167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.487 Running I/O for 1 seconds... 
00:07:25.441 00:07:25.441 Latency(us) 00:07:25.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.441 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:25.441 Verification LBA range: start 0x0 length 0x400 00:07:25.441 Nvme0n1 : 1.02 1447.27 90.45 0.00 0.00 43549.99 9126.49 37865.24 00:07:25.441 =================================================================================================================== 00:07:25.441 Total : 1447.27 90.45 0.00 0.00 43549.99 9126.49 37865.24 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:25.699 rmmod nvme_tcp 
00:07:25.699 rmmod nvme_fabrics 00:07:25.699 rmmod nvme_keyring 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 858245 ']' 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 858245 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 858245 ']' 00:07:25.699 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 858245 00:07:25.700 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:25.700 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.700 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 858245 00:07:25.700 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:25.700 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:25.700 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 858245' 00:07:25.700 killing process with pid 858245 00:07:25.700 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 858245 00:07:25.700 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 858245 00:07:25.958 [2024-07-26 08:40:44.363229] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:25.958 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:25.958 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:25.958 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:25.958 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:25.958 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:25.958 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.958 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.958 08:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.498 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:28.498 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:28.498 00:07:28.498 real 0m8.562s 00:07:28.498 user 0m19.598s 00:07:28.498 sys 0m2.513s 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.499 ************************************ 00:07:28.499 END TEST nvmf_host_management 00:07:28.499 ************************************ 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.499 ************************************ 00:07:28.499 START TEST nvmf_lvol 00:07:28.499 ************************************ 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:28.499 * Looking for test storage... 00:07:28.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:28.499 08:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:30.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:30.402 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:30.402 08:40:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:30.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:30.402 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:30.403 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:30.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:07:30.403 00:07:30.403 --- 10.0.0.2 ping statistics --- 00:07:30.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.403 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:30.403 00:07:30.403 --- 10.0.0.1 ping statistics --- 00:07:30.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.403 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=860664 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 860664 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 860664 ']' 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.403 08:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.403 [2024-07-26 08:40:48.747869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:30.403 [2024-07-26 08:40:48.747954] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.403 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.403 [2024-07-26 08:40:48.793175] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:30.403 [2024-07-26 08:40:48.824105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.661 [2024-07-26 08:40:48.914132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.661 [2024-07-26 08:40:48.914200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.661 [2024-07-26 08:40:48.914226] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.661 [2024-07-26 08:40:48.914240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.661 [2024-07-26 08:40:48.914253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:30.661 [2024-07-26 08:40:48.914340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.661 [2024-07-26 08:40:48.914394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.661 [2024-07-26 08:40:48.914412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.661 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.661 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:30.661 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.661 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.661 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.661 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.661 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:30.919 [2024-07-26 08:40:49.279863] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.919 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:31.176 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:31.176 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:31.434 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:31.434 08:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:31.692 08:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:31.950 08:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1de86678-0a14-4004-8a00-204a0159ffa8 00:07:31.950 08:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1de86678-0a14-4004-8a00-204a0159ffa8 lvol 20 00:07:32.207 08:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4e1f4fd2-2149-4d15-b626-cd0cf63b59db 00:07:32.207 08:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:32.465 08:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e1f4fd2-2149-4d15-b626-cd0cf63b59db 00:07:32.722 08:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:33.007 [2024-07-26 08:40:51.342519] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.007 08:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:33.265 08:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=861076 00:07:33.265 08:40:51 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:33.265 08:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:33.265 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.203 08:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4e1f4fd2-2149-4d15-b626-cd0cf63b59db MY_SNAPSHOT 00:07:34.461 08:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2bc53e93-7916-493c-827b-79de5146a26a 00:07:34.461 08:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4e1f4fd2-2149-4d15-b626-cd0cf63b59db 30 00:07:35.027 08:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2bc53e93-7916-493c-827b-79de5146a26a MY_CLONE 00:07:35.027 08:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=27a1c595-7d95-47aa-8057-3fd598ec0bab 00:07:35.027 08:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 27a1c595-7d95-47aa-8057-3fd598ec0bab 00:07:35.595 08:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 861076 00:07:43.714 Initializing NVMe Controllers 00:07:43.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:43.714 Controller IO queue size 128, less than required. 00:07:43.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:43.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:43.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:43.714 Initialization complete. Launching workers. 00:07:43.714 ======================================================== 00:07:43.714 Latency(us) 00:07:43.714 Device Information : IOPS MiB/s Average min max 00:07:43.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10138.60 39.60 12632.04 2133.55 129928.72 00:07:43.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10649.20 41.60 12021.99 2359.56 58956.13 00:07:43.715 ======================================================== 00:07:43.715 Total : 20787.80 81.20 12319.52 2133.55 129928.72 00:07:43.715 00:07:43.715 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:43.972 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4e1f4fd2-2149-4d15-b626-cd0cf63b59db 00:07:44.229 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1de86678-0a14-4004-8a00-204a0159ffa8 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:44.488 08:41:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.488 rmmod nvme_tcp 00:07:44.488 rmmod nvme_fabrics 00:07:44.488 rmmod nvme_keyring 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 860664 ']' 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 860664 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 860664 ']' 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 860664 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 860664 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 860664' 00:07:44.488 killing process with pid 860664 00:07:44.488 08:41:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 860664 00:07:44.488 08:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 860664 00:07:44.746 08:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.746 08:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.746 08:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.746 08:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.746 08:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.746 08:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.746 08:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.746 08:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:47.278 00:07:47.278 real 0m18.742s 00:07:47.278 user 1m3.673s 00:07:47.278 sys 0m5.625s 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.278 ************************************ 00:07:47.278 END TEST nvmf_lvol 00:07:47.278 ************************************ 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:47.278 08:41:05 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.278 ************************************ 00:07:47.278 START TEST nvmf_lvs_grow 00:07:47.278 ************************************ 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:47.278 * Looking for test storage... 00:07:47.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.278 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:47.279 08:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:49.186 08:41:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:49.186 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:49.187 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.187 
08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:49.187 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.187 08:41:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:49.187 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:49.187 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.187 08:41:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:49.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:07:49.187 00:07:49.187 --- 10.0.0.2 ping statistics --- 00:07:49.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.187 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:49.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:07:49.187 00:07:49.187 --- 10.0.0.1 ping statistics --- 00:07:49.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.187 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.187 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=864347 00:07:49.188 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:49.188 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 864347 00:07:49.188 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 864347 ']' 00:07:49.188 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.188 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.188 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.188 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.188 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.188 [2024-07-26 08:41:07.504427] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:49.188 [2024-07-26 08:41:07.504514] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.188 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.188 [2024-07-26 08:41:07.542917] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:49.188 [2024-07-26 08:41:07.569054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.446 [2024-07-26 08:41:07.657176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:49.446 [2024-07-26 08:41:07.657230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.446 [2024-07-26 08:41:07.657243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.446 [2024-07-26 08:41:07.657255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.446 [2024-07-26 08:41:07.657265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.446 [2024-07-26 08:41:07.657291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.446 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.446 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:49.446 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.446 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:49.446 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.446 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.446 08:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:49.703 [2024-07-26 08:41:08.030405] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.703 08:41:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.703 ************************************ 00:07:49.703 START TEST lvs_grow_clean 00:07:49.703 ************************************ 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:49.703 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 
00:07:49.961 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:49.961 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:50.220 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=68d3221d-5693-4031-8fa5-e63b18af0308 00:07:50.220 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:07:50.220 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:50.480 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:50.480 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:50.480 08:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 68d3221d-5693-4031-8fa5-e63b18af0308 lvol 150 00:07:50.738 08:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a03eaca4-66c0-4ab7-832b-b4d84e47703b 00:07:50.738 08:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.738 08:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan 
aio_bdev 00:07:50.995 [2024-07-26 08:41:09.331307] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:50.995 [2024-07-26 08:41:09.331394] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:50.995 true 00:07:50.995 08:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:50.995 08:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:07:51.254 08:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:51.254 08:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:51.512 08:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a03eaca4-66c0-4ab7-832b-b4d84e47703b 00:07:51.770 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:52.028 [2024-07-26 08:41:10.434798] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.028 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.285 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=864786 00:07:52.285 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:52.286 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.286 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 864786 /var/tmp/bdevperf.sock 00:07:52.286 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 864786 ']' 00:07:52.286 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:52.286 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.286 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:52.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:52.286 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.286 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:52.286 [2024-07-26 08:41:10.731438] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:07:52.286 [2024-07-26 08:41:10.731511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864786 ] 00:07:52.544 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.544 [2024-07-26 08:41:10.764253] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.544 [2024-07-26 08:41:10.794621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.544 [2024-07-26 08:41:10.884306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.544 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.544 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:52.544 08:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:53.145 Nvme0n1 00:07:53.145 08:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:53.404 [ 00:07:53.404 { 00:07:53.404 "name": "Nvme0n1", 00:07:53.404 "aliases": [ 00:07:53.404 "a03eaca4-66c0-4ab7-832b-b4d84e47703b" 00:07:53.404 ], 00:07:53.404 "product_name": "NVMe disk", 00:07:53.404 "block_size": 4096, 00:07:53.404 "num_blocks": 38912, 00:07:53.404 "uuid": "a03eaca4-66c0-4ab7-832b-b4d84e47703b", 00:07:53.404 "assigned_rate_limits": { 00:07:53.404 "rw_ios_per_sec": 0, 00:07:53.404 
"rw_mbytes_per_sec": 0, 00:07:53.404 "r_mbytes_per_sec": 0, 00:07:53.404 "w_mbytes_per_sec": 0 00:07:53.404 }, 00:07:53.404 "claimed": false, 00:07:53.404 "zoned": false, 00:07:53.404 "supported_io_types": { 00:07:53.404 "read": true, 00:07:53.404 "write": true, 00:07:53.404 "unmap": true, 00:07:53.404 "flush": true, 00:07:53.404 "reset": true, 00:07:53.404 "nvme_admin": true, 00:07:53.404 "nvme_io": true, 00:07:53.404 "nvme_io_md": false, 00:07:53.404 "write_zeroes": true, 00:07:53.404 "zcopy": false, 00:07:53.404 "get_zone_info": false, 00:07:53.404 "zone_management": false, 00:07:53.404 "zone_append": false, 00:07:53.404 "compare": true, 00:07:53.404 "compare_and_write": true, 00:07:53.404 "abort": true, 00:07:53.404 "seek_hole": false, 00:07:53.404 "seek_data": false, 00:07:53.404 "copy": true, 00:07:53.404 "nvme_iov_md": false 00:07:53.404 }, 00:07:53.404 "memory_domains": [ 00:07:53.404 { 00:07:53.404 "dma_device_id": "system", 00:07:53.404 "dma_device_type": 1 00:07:53.404 } 00:07:53.404 ], 00:07:53.404 "driver_specific": { 00:07:53.404 "nvme": [ 00:07:53.404 { 00:07:53.404 "trid": { 00:07:53.404 "trtype": "TCP", 00:07:53.404 "adrfam": "IPv4", 00:07:53.404 "traddr": "10.0.0.2", 00:07:53.404 "trsvcid": "4420", 00:07:53.404 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:53.404 }, 00:07:53.404 "ctrlr_data": { 00:07:53.404 "cntlid": 1, 00:07:53.404 "vendor_id": "0x8086", 00:07:53.404 "model_number": "SPDK bdev Controller", 00:07:53.404 "serial_number": "SPDK0", 00:07:53.404 "firmware_revision": "24.09", 00:07:53.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.404 "oacs": { 00:07:53.404 "security": 0, 00:07:53.404 "format": 0, 00:07:53.404 "firmware": 0, 00:07:53.404 "ns_manage": 0 00:07:53.404 }, 00:07:53.404 "multi_ctrlr": true, 00:07:53.404 "ana_reporting": false 00:07:53.404 }, 00:07:53.404 "vs": { 00:07:53.404 "nvme_version": "1.3" 00:07:53.404 }, 00:07:53.404 "ns_data": { 00:07:53.404 "id": 1, 00:07:53.404 "can_share": true 00:07:53.404 } 00:07:53.404 } 
00:07:53.404 ], 00:07:53.404 "mp_policy": "active_passive" 00:07:53.404 } 00:07:53.404 } 00:07:53.404 ] 00:07:53.404 08:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=864923 00:07:53.404 08:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:53.404 08:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:53.404 Running I/O for 10 seconds... 00:07:54.781 Latency(us) 00:07:54.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.781 Nvme0n1 : 1.00 14243.00 55.64 0.00 0.00 0.00 0.00 0.00 00:07:54.781 =================================================================================================================== 00:07:54.781 Total : 14243.00 55.64 0.00 0.00 0.00 0.00 0.00 00:07:54.781 00:07:55.346 08:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:07:55.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.603 Nvme0n1 : 2.00 15100.50 58.99 0.00 0.00 0.00 0.00 0.00 00:07:55.603 =================================================================================================================== 00:07:55.603 Total : 15100.50 58.99 0.00 0.00 0.00 0.00 0.00 00:07:55.603 00:07:55.603 true 00:07:55.603 08:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:07:55.603 08:41:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:55.862 08:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:55.862 08:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:55.862 08:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 864923 00:07:56.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.430 Nvme0n1 : 3.00 15453.67 60.37 0.00 0.00 0.00 0.00 0.00 00:07:56.430 =================================================================================================================== 00:07:56.431 Total : 15453.67 60.37 0.00 0.00 0.00 0.00 0.00 00:07:56.431 00:07:57.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.811 Nvme0n1 : 4.00 15482.50 60.48 0.00 0.00 0.00 0.00 0.00 00:07:57.811 =================================================================================================================== 00:07:57.811 Total : 15482.50 60.48 0.00 0.00 0.00 0.00 0.00 00:07:57.811 00:07:58.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.751 Nvme0n1 : 5.00 15679.80 61.25 0.00 0.00 0.00 0.00 0.00 00:07:58.751 =================================================================================================================== 00:07:58.751 Total : 15679.80 61.25 0.00 0.00 0.00 0.00 0.00 00:07:58.751 00:07:59.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.690 Nvme0n1 : 6.00 15829.00 61.83 0.00 0.00 0.00 0.00 0.00 00:07:59.690 =================================================================================================================== 00:07:59.690 Total : 15829.00 61.83 0.00 0.00 0.00 0.00 0.00 00:07:59.690 00:08:00.626 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:08:00.626 Nvme0n1 : 7.00 15912.00 62.16 0.00 0.00 0.00 0.00 0.00 00:08:00.627 =================================================================================================================== 00:08:00.627 Total : 15912.00 62.16 0.00 0.00 0.00 0.00 0.00 00:08:00.627 00:08:01.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.564 Nvme0n1 : 8.00 15877.12 62.02 0.00 0.00 0.00 0.00 0.00 00:08:01.564 =================================================================================================================== 00:08:01.564 Total : 15877.12 62.02 0.00 0.00 0.00 0.00 0.00 00:08:01.564 00:08:02.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.503 Nvme0n1 : 9.00 15807.33 61.75 0.00 0.00 0.00 0.00 0.00 00:08:02.503 =================================================================================================================== 00:08:02.503 Total : 15807.33 61.75 0.00 0.00 0.00 0.00 0.00 00:08:02.503 00:08:03.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.439 Nvme0n1 : 10.00 15731.70 61.45 0.00 0.00 0.00 0.00 0.00 00:08:03.439 =================================================================================================================== 00:08:03.439 Total : 15731.70 61.45 0.00 0.00 0.00 0.00 0.00 00:08:03.439 00:08:03.439 00:08:03.439 Latency(us) 00:08:03.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.439 Nvme0n1 : 10.00 15738.98 61.48 0.00 0.00 8127.74 2281.62 16893.72 00:08:03.439 =================================================================================================================== 00:08:03.439 Total : 15738.98 61.48 0.00 0.00 8127.74 2281.62 16893.72 00:08:03.439 0 00:08:03.439 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@66 -- # killprocess 864786 00:08:03.439 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 864786 ']' 00:08:03.439 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 864786 00:08:03.439 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:03.439 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.439 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 864786 00:08:03.697 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:03.697 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:03.697 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 864786' 00:08:03.697 killing process with pid 864786 00:08:03.697 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 864786 00:08:03.697 Received shutdown signal, test time was about 10.000000 seconds 00:08:03.697 00:08:03.697 Latency(us) 00:08:03.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.697 =================================================================================================================== 00:08:03.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:03.697 08:41:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 864786 00:08:03.697 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.954 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:04.212 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:08:04.212 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:04.472 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:04.472 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:04.472 08:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:04.731 [2024-07-26 08:41:23.113841] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:08:04.731 08:41:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:04.731 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:08:04.989 request: 00:08:04.989 { 00:08:04.989 "uuid": "68d3221d-5693-4031-8fa5-e63b18af0308", 00:08:04.989 "method": "bdev_lvol_get_lvstores", 00:08:04.989 "req_id": 1 00:08:04.989 } 00:08:04.989 Got JSON-RPC error response 00:08:04.989 response: 00:08:04.989 { 00:08:04.989 "code": -19, 00:08:04.989 "message": "No such device" 00:08:04.989 } 00:08:04.989 08:41:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:04.989 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.989 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.989 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.989 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.247 aio_bdev 00:08:05.247 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a03eaca4-66c0-4ab7-832b-b4d84e47703b 00:08:05.247 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=a03eaca4-66c0-4ab7-832b-b4d84e47703b 00:08:05.247 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.247 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:05.247 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.247 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.247 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:05.507 08:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a03eaca4-66c0-4ab7-832b-b4d84e47703b -t 2000 00:08:05.766 [ 00:08:05.766 { 00:08:05.766 "name": "a03eaca4-66c0-4ab7-832b-b4d84e47703b", 00:08:05.766 "aliases": [ 00:08:05.766 "lvs/lvol" 00:08:05.766 ], 00:08:05.766 "product_name": "Logical Volume", 00:08:05.766 "block_size": 4096, 00:08:05.766 "num_blocks": 38912, 00:08:05.766 "uuid": "a03eaca4-66c0-4ab7-832b-b4d84e47703b", 00:08:05.766 "assigned_rate_limits": { 00:08:05.766 "rw_ios_per_sec": 0, 00:08:05.766 "rw_mbytes_per_sec": 0, 00:08:05.766 "r_mbytes_per_sec": 0, 00:08:05.766 "w_mbytes_per_sec": 0 00:08:05.766 }, 00:08:05.766 "claimed": false, 00:08:05.766 "zoned": false, 00:08:05.766 "supported_io_types": { 00:08:05.766 "read": true, 00:08:05.766 "write": true, 00:08:05.766 "unmap": true, 00:08:05.766 "flush": false, 00:08:05.766 "reset": true, 00:08:05.766 "nvme_admin": false, 00:08:05.766 "nvme_io": false, 00:08:05.766 "nvme_io_md": false, 00:08:05.766 "write_zeroes": true, 00:08:05.766 "zcopy": false, 00:08:05.766 "get_zone_info": false, 00:08:05.766 "zone_management": false, 00:08:05.766 "zone_append": false, 00:08:05.766 "compare": false, 00:08:05.766 "compare_and_write": false, 00:08:05.766 "abort": false, 00:08:05.766 "seek_hole": true, 00:08:05.766 "seek_data": true, 00:08:05.766 "copy": false, 00:08:05.766 "nvme_iov_md": false 00:08:05.766 }, 00:08:05.766 "driver_specific": { 00:08:05.766 "lvol": { 00:08:05.766 "lvol_store_uuid": "68d3221d-5693-4031-8fa5-e63b18af0308", 00:08:05.766 "base_bdev": "aio_bdev", 00:08:05.766 "thin_provision": false, 00:08:05.766 "num_allocated_clusters": 38, 00:08:05.766 "snapshot": false, 00:08:05.766 "clone": false, 00:08:05.766 "esnap_clone": false 00:08:05.766 } 00:08:05.766 } 00:08:05.766 } 00:08:05.766 ] 00:08:05.766 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:05.766 08:41:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:08:05.766 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:06.024 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:06.024 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:08:06.024 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:06.283 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:06.283 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a03eaca4-66c0-4ab7-832b-b4d84e47703b 00:08:06.541 08:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68d3221d-5693-4031-8fa5-e63b18af0308 00:08:06.800 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.369 00:08:07.369 real 0m17.488s 00:08:07.369 user 0m16.661s 00:08:07.369 sys 0m1.972s 00:08:07.369 
08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:07.369 ************************************ 00:08:07.369 END TEST lvs_grow_clean 00:08:07.369 ************************************ 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.369 ************************************ 00:08:07.369 START TEST lvs_grow_dirty 00:08:07.369 ************************************ 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:07.369 08:41:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.369 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.657 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:07.657 08:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:07.916 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=304c15f0-c013-4f49-9115-4db4d27992bc 00:08:07.916 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:07.916 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:07.916 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:07.916 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:07.916 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 304c15f0-c013-4f49-9115-4db4d27992bc lvol 150 00:08:08.176 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ce511bee-2cb5-4aaf-b98d-c95700a21248 00:08:08.176 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.176 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.434 [2024-07-26 08:41:26.871310] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:08.434 [2024-07-26 08:41:26.871414] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:08.434 true 00:08:08.434 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:08.434 08:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:08.691 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:08.691 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:08.951 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ce511bee-2cb5-4aaf-b98d-c95700a21248 00:08:09.210 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.469 [2024-07-26 08:41:27.866383] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.469 08:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.727 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=866851 00:08:09.727 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:09.727 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.727 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 866851 /var/tmp/bdevperf.sock 00:08:09.727 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 866851 ']' 00:08:09.727 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.727 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.727 08:41:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.727 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.727 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.985 [2024-07-26 08:41:28.221666] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:09.985 [2024-07-26 08:41:28.221750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866851 ] 00:08:09.985 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.985 [2024-07-26 08:41:28.259190] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:09.985 [2024-07-26 08:41:28.289677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.985 [2024-07-26 08:41:28.382504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.243 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.243 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:10.243 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.501 Nvme0n1 00:08:10.760 08:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:10.760 [ 00:08:10.760 { 00:08:10.760 "name": "Nvme0n1", 00:08:10.760 "aliases": [ 00:08:10.760 "ce511bee-2cb5-4aaf-b98d-c95700a21248" 00:08:10.760 ], 00:08:10.760 "product_name": "NVMe disk", 00:08:10.760 "block_size": 4096, 00:08:10.760 "num_blocks": 38912, 00:08:10.760 "uuid": "ce511bee-2cb5-4aaf-b98d-c95700a21248", 00:08:10.760 "assigned_rate_limits": { 00:08:10.760 "rw_ios_per_sec": 0, 00:08:10.760 "rw_mbytes_per_sec": 0, 00:08:10.760 "r_mbytes_per_sec": 0, 00:08:10.760 "w_mbytes_per_sec": 0 00:08:10.760 }, 00:08:10.760 "claimed": false, 00:08:10.760 "zoned": false, 00:08:10.760 "supported_io_types": { 00:08:10.760 "read": true, 00:08:10.760 "write": true, 00:08:10.760 "unmap": true, 00:08:10.760 "flush": true, 00:08:10.760 "reset": true, 00:08:10.760 "nvme_admin": true, 00:08:10.760 "nvme_io": true, 00:08:10.760 "nvme_io_md": false, 00:08:10.760 "write_zeroes": true, 00:08:10.760 "zcopy": false, 00:08:10.760 "get_zone_info": false, 00:08:10.760 
"zone_management": false, 00:08:10.760 "zone_append": false, 00:08:10.760 "compare": true, 00:08:10.760 "compare_and_write": true, 00:08:10.760 "abort": true, 00:08:10.760 "seek_hole": false, 00:08:10.760 "seek_data": false, 00:08:10.760 "copy": true, 00:08:10.760 "nvme_iov_md": false 00:08:10.760 }, 00:08:10.760 "memory_domains": [ 00:08:10.760 { 00:08:10.760 "dma_device_id": "system", 00:08:10.760 "dma_device_type": 1 00:08:10.760 } 00:08:10.760 ], 00:08:10.760 "driver_specific": { 00:08:10.760 "nvme": [ 00:08:10.760 { 00:08:10.760 "trid": { 00:08:10.760 "trtype": "TCP", 00:08:10.760 "adrfam": "IPv4", 00:08:10.760 "traddr": "10.0.0.2", 00:08:10.760 "trsvcid": "4420", 00:08:10.760 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:10.760 }, 00:08:10.760 "ctrlr_data": { 00:08:10.760 "cntlid": 1, 00:08:10.760 "vendor_id": "0x8086", 00:08:10.760 "model_number": "SPDK bdev Controller", 00:08:10.760 "serial_number": "SPDK0", 00:08:10.760 "firmware_revision": "24.09", 00:08:10.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.760 "oacs": { 00:08:10.760 "security": 0, 00:08:10.760 "format": 0, 00:08:10.760 "firmware": 0, 00:08:10.760 "ns_manage": 0 00:08:10.760 }, 00:08:10.760 "multi_ctrlr": true, 00:08:10.760 "ana_reporting": false 00:08:10.760 }, 00:08:10.760 "vs": { 00:08:10.760 "nvme_version": "1.3" 00:08:10.760 }, 00:08:10.760 "ns_data": { 00:08:10.760 "id": 1, 00:08:10.760 "can_share": true 00:08:10.760 } 00:08:10.760 } 00:08:10.760 ], 00:08:10.760 "mp_policy": "active_passive" 00:08:10.760 } 00:08:10.760 } 00:08:10.760 ] 00:08:10.760 08:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=866991 00:08:10.760 08:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:10.760 08:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bdevperf.sock perform_tests 00:08:11.018 Running I/O for 10 seconds... 00:08:11.957 Latency(us) 00:08:11.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.957 Nvme0n1 : 1.00 14245.00 55.64 0.00 0.00 0.00 0.00 0.00 00:08:11.957 =================================================================================================================== 00:08:11.957 Total : 14245.00 55.64 0.00 0.00 0.00 0.00 0.00 00:08:11.957 00:08:12.893 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:12.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.893 Nvme0n1 : 2.00 14526.50 56.74 0.00 0.00 0.00 0.00 0.00 00:08:12.893 =================================================================================================================== 00:08:12.893 Total : 14526.50 56.74 0.00 0.00 0.00 0.00 0.00 00:08:12.893 00:08:13.151 true 00:08:13.151 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:13.151 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:13.410 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:13.410 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:13.410 08:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 866991 00:08:13.979 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:08:13.979 Nvme0n1 : 3.00 14659.00 57.26 0.00 0.00 0.00 0.00 0.00 00:08:13.979 =================================================================================================================== 00:08:13.979 Total : 14659.00 57.26 0.00 0.00 0.00 0.00 0.00 00:08:13.979 00:08:14.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.916 Nvme0n1 : 4.00 14694.25 57.40 0.00 0.00 0.00 0.00 0.00 00:08:14.916 =================================================================================================================== 00:08:14.916 Total : 14694.25 57.40 0.00 0.00 0.00 0.00 0.00 00:08:14.916 00:08:16.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.295 Nvme0n1 : 5.00 14740.00 57.58 0.00 0.00 0.00 0.00 0.00 00:08:16.295 =================================================================================================================== 00:08:16.295 Total : 14740.00 57.58 0.00 0.00 0.00 0.00 0.00 00:08:16.295 00:08:17.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.233 Nvme0n1 : 6.00 14782.17 57.74 0.00 0.00 0.00 0.00 0.00 00:08:17.233 =================================================================================================================== 00:08:17.233 Total : 14782.17 57.74 0.00 0.00 0.00 0.00 0.00 00:08:17.233 00:08:18.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.171 Nvme0n1 : 7.00 14813.29 57.86 0.00 0.00 0.00 0.00 0.00 00:08:18.171 =================================================================================================================== 00:08:18.171 Total : 14813.29 57.86 0.00 0.00 0.00 0.00 0.00 00:08:18.171 00:08:19.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.108 Nvme0n1 : 8.00 14862.12 58.06 0.00 0.00 0.00 0.00 0.00 00:08:19.108 
=================================================================================================================== 00:08:19.108 Total : 14862.12 58.06 0.00 0.00 0.00 0.00 0.00 00:08:19.108 00:08:20.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.047 Nvme0n1 : 9.00 14905.22 58.22 0.00 0.00 0.00 0.00 0.00 00:08:20.047 =================================================================================================================== 00:08:20.047 Total : 14905.22 58.22 0.00 0.00 0.00 0.00 0.00 00:08:20.047 00:08:20.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.983 Nvme0n1 : 10.00 14940.60 58.36 0.00 0.00 0.00 0.00 0.00 00:08:20.983 =================================================================================================================== 00:08:20.983 Total : 14940.60 58.36 0.00 0.00 0.00 0.00 0.00 00:08:20.983 00:08:20.983 00:08:20.983 Latency(us) 00:08:20.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.983 Nvme0n1 : 10.00 14946.93 58.39 0.00 0.00 8558.91 2172.40 17087.91 00:08:20.983 =================================================================================================================== 00:08:20.983 Total : 14946.93 58.39 0.00 0.00 8558.91 2172.40 17087.91 00:08:20.983 0 00:08:20.983 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 866851 00:08:20.983 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 866851 ']' 00:08:20.983 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 866851 00:08:20.983 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:20.983 08:41:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.983 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 866851 00:08:20.983 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:20.983 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:20.983 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 866851' 00:08:20.983 killing process with pid 866851 00:08:20.983 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 866851 00:08:20.983 Received shutdown signal, test time was about 10.000000 seconds 00:08:20.983 00:08:20.983 Latency(us) 00:08:20.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.983 =================================================================================================================== 00:08:20.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.983 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 866851 00:08:21.242 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.499 08:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.758 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:21.758 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 864347 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 864347 00:08:22.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 864347 Killed "${NVMF_APP[@]}" "$@" 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=868329 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 868329 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 868329 ']' 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.040 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.040 [2024-07-26 08:41:40.473078] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:22.040 [2024-07-26 08:41:40.473184] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.303 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.303 [2024-07-26 08:41:40.516093] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:22.303 [2024-07-26 08:41:40.542714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.303 [2024-07-26 08:41:40.629767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:22.303 [2024-07-26 08:41:40.629828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.303 [2024-07-26 08:41:40.629842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.303 [2024-07-26 08:41:40.629854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.303 [2024-07-26 08:41:40.629863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.303 [2024-07-26 08:41:40.629890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.303 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.303 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:22.303 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.303 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:22.303 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.562 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.562 08:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.562 [2024-07-26 08:41:40.998672] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:22.562 [2024-07-26 08:41:40.998812] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:22.562 
[2024-07-26 08:41:40.998858] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:22.562 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:22.562 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ce511bee-2cb5-4aaf-b98d-c95700a21248 00:08:22.562 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ce511bee-2cb5-4aaf-b98d-c95700a21248 00:08:22.562 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.562 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:22.822 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.823 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.823 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.083 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ce511bee-2cb5-4aaf-b98d-c95700a21248 -t 2000 00:08:23.343 [ 00:08:23.343 { 00:08:23.343 "name": "ce511bee-2cb5-4aaf-b98d-c95700a21248", 00:08:23.343 "aliases": [ 00:08:23.343 "lvs/lvol" 00:08:23.343 ], 00:08:23.343 "product_name": "Logical Volume", 00:08:23.343 "block_size": 4096, 00:08:23.343 "num_blocks": 38912, 00:08:23.343 "uuid": "ce511bee-2cb5-4aaf-b98d-c95700a21248", 00:08:23.343 "assigned_rate_limits": { 00:08:23.343 "rw_ios_per_sec": 0, 00:08:23.343 "rw_mbytes_per_sec": 0, 00:08:23.343 "r_mbytes_per_sec": 0, 
00:08:23.343 "w_mbytes_per_sec": 0 00:08:23.343 }, 00:08:23.343 "claimed": false, 00:08:23.343 "zoned": false, 00:08:23.343 "supported_io_types": { 00:08:23.343 "read": true, 00:08:23.343 "write": true, 00:08:23.343 "unmap": true, 00:08:23.343 "flush": false, 00:08:23.343 "reset": true, 00:08:23.343 "nvme_admin": false, 00:08:23.343 "nvme_io": false, 00:08:23.343 "nvme_io_md": false, 00:08:23.343 "write_zeroes": true, 00:08:23.343 "zcopy": false, 00:08:23.343 "get_zone_info": false, 00:08:23.343 "zone_management": false, 00:08:23.343 "zone_append": false, 00:08:23.343 "compare": false, 00:08:23.343 "compare_and_write": false, 00:08:23.343 "abort": false, 00:08:23.343 "seek_hole": true, 00:08:23.343 "seek_data": true, 00:08:23.343 "copy": false, 00:08:23.343 "nvme_iov_md": false 00:08:23.343 }, 00:08:23.343 "driver_specific": { 00:08:23.343 "lvol": { 00:08:23.343 "lvol_store_uuid": "304c15f0-c013-4f49-9115-4db4d27992bc", 00:08:23.343 "base_bdev": "aio_bdev", 00:08:23.343 "thin_provision": false, 00:08:23.343 "num_allocated_clusters": 38, 00:08:23.343 "snapshot": false, 00:08:23.343 "clone": false, 00:08:23.343 "esnap_clone": false 00:08:23.343 } 00:08:23.343 } 00:08:23.343 } 00:08:23.343 ] 00:08:23.343 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:23.343 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:23.343 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:23.604 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:23.604 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:23.604 08:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:23.604 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:23.604 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.863 [2024-07-26 08:41:42.295823] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:24.122 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:24.122 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:24.122 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:24.122 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.122 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.122 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:24.123 request: 00:08:24.123 { 00:08:24.123 "uuid": "304c15f0-c013-4f49-9115-4db4d27992bc", 00:08:24.123 "method": "bdev_lvol_get_lvstores", 00:08:24.123 "req_id": 1 00:08:24.123 } 00:08:24.123 Got JSON-RPC error response 00:08:24.123 response: 00:08:24.123 { 00:08:24.123 "code": -19, 00:08:24.123 "message": "No such device" 00:08:24.123 } 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.123 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.383 aio_bdev 00:08:24.383 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ce511bee-2cb5-4aaf-b98d-c95700a21248 00:08:24.383 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ce511bee-2cb5-4aaf-b98d-c95700a21248 00:08:24.383 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.383 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:24.383 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.383 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.383 08:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.641 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ce511bee-2cb5-4aaf-b98d-c95700a21248 -t 2000 00:08:24.900 [ 00:08:24.900 { 00:08:24.900 "name": "ce511bee-2cb5-4aaf-b98d-c95700a21248", 00:08:24.900 "aliases": [ 00:08:24.900 "lvs/lvol" 00:08:24.900 ], 00:08:24.900 "product_name": "Logical Volume", 00:08:24.900 "block_size": 4096, 00:08:24.900 "num_blocks": 38912, 00:08:24.900 "uuid": "ce511bee-2cb5-4aaf-b98d-c95700a21248", 00:08:24.900 "assigned_rate_limits": { 00:08:24.900 "rw_ios_per_sec": 0, 00:08:24.900 "rw_mbytes_per_sec": 0, 00:08:24.900 "r_mbytes_per_sec": 0, 00:08:24.900 "w_mbytes_per_sec": 0 
00:08:24.900 }, 00:08:24.900 "claimed": false, 00:08:24.900 "zoned": false, 00:08:24.900 "supported_io_types": { 00:08:24.900 "read": true, 00:08:24.900 "write": true, 00:08:24.900 "unmap": true, 00:08:24.900 "flush": false, 00:08:24.900 "reset": true, 00:08:24.900 "nvme_admin": false, 00:08:24.900 "nvme_io": false, 00:08:24.900 "nvme_io_md": false, 00:08:24.900 "write_zeroes": true, 00:08:24.900 "zcopy": false, 00:08:24.900 "get_zone_info": false, 00:08:24.900 "zone_management": false, 00:08:24.900 "zone_append": false, 00:08:24.900 "compare": false, 00:08:24.900 "compare_and_write": false, 00:08:24.900 "abort": false, 00:08:24.900 "seek_hole": true, 00:08:24.900 "seek_data": true, 00:08:24.900 "copy": false, 00:08:24.900 "nvme_iov_md": false 00:08:24.900 }, 00:08:24.900 "driver_specific": { 00:08:24.900 "lvol": { 00:08:24.900 "lvol_store_uuid": "304c15f0-c013-4f49-9115-4db4d27992bc", 00:08:24.900 "base_bdev": "aio_bdev", 00:08:24.900 "thin_provision": false, 00:08:24.900 "num_allocated_clusters": 38, 00:08:24.900 "snapshot": false, 00:08:24.900 "clone": false, 00:08:24.900 "esnap_clone": false 00:08:24.900 } 00:08:24.900 } 00:08:24.900 } 00:08:24.900 ] 00:08:24.900 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:24.900 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:24.900 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:25.157 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:25.157 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:25.157 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:25.414 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:25.414 08:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ce511bee-2cb5-4aaf-b98d-c95700a21248 00:08:25.672 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 304c15f0-c013-4f49-9115-4db4d27992bc 00:08:25.931 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.190 00:08:26.190 real 0m19.012s 00:08:26.190 user 0m48.091s 00:08:26.190 sys 0m4.715s 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:26.190 ************************************ 00:08:26.190 END TEST lvs_grow_dirty 00:08:26.190 ************************************ 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@809 -- # id=0 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:26.190 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:26.450 nvmf_trace.0 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.450 rmmod nvme_tcp 00:08:26.450 rmmod nvme_fabrics 00:08:26.450 rmmod nvme_keyring 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.450 08:41:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 868329 ']' 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 868329 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 868329 ']' 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 868329 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 868329 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 868329' 00:08:26.450 killing process with pid 868329 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 868329 00:08:26.450 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 868329 00:08:26.709 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.709 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.709 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.709 08:41:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.709 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.709 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.709 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.709 08:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.613 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:28.613 00:08:28.613 real 0m41.765s 00:08:28.613 user 1m10.397s 00:08:28.613 sys 0m8.518s 00:08:28.613 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.613 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.613 ************************************ 00:08:28.613 END TEST nvmf_lvs_grow 00:08:28.613 ************************************ 00:08:28.613 08:41:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:28.613 08:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:28.613 08:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.613 08:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.870 ************************************ 00:08:28.870 START TEST nvmf_bdev_io_wait 00:08:28.870 ************************************ 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:28.870 * Looking for test storage... 00:08:28.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.870 
08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.870 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:28.871 08:41:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:28.871 08:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.774 08:41:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:30.774 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:30.774 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.774 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.775 08:41:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:30.775 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.775 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.775 08:41:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.775 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.032 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.032 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.032 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:08:31.032 00:08:31.032 --- 10.0.0.2 ping statistics --- 00:08:31.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.032 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:08:31.032 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:08:31.033 00:08:31.033 --- 10.0.0.1 ping statistics --- 00:08:31.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.033 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=870849 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 870849 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 870849 ']' 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.033 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.033 [2024-07-26 08:41:49.345949] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:31.033 [2024-07-26 08:41:49.346027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.033 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.033 [2024-07-26 08:41:49.383089] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:31.033 [2024-07-26 08:41:49.414921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.292 [2024-07-26 08:41:49.508613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.292 [2024-07-26 08:41:49.508674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.292 [2024-07-26 08:41:49.508691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.292 [2024-07-26 08:41:49.508704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.292 [2024-07-26 08:41:49.508716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.292 [2024-07-26 08:41:49.508834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.292 [2024-07-26 08:41:49.508910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.292 [2024-07-26 08:41:49.509001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.292 [2024-07-26 08:41:49.509003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.292 08:41:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.292 [2024-07-26 08:41:49.661538] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.292 Malloc0 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.292 [2024-07-26 08:41:49.724459] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=870877 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=870878 
00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=870881 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:31.292 { 00:08:31.292 "params": { 00:08:31.292 "name": "Nvme$subsystem", 00:08:31.292 "trtype": "$TEST_TRANSPORT", 00:08:31.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.292 "adrfam": "ipv4", 00:08:31.292 "trsvcid": "$NVMF_PORT", 00:08:31.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.292 "hdgst": ${hdgst:-false}, 00:08:31.292 "ddgst": ${ddgst:-false} 00:08:31.292 }, 00:08:31.292 "method": "bdev_nvme_attach_controller" 00:08:31.292 } 00:08:31.292 EOF 00:08:31.292 )") 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:31.292 08:41:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=870883 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:31.292 { 00:08:31.292 "params": { 00:08:31.292 "name": "Nvme$subsystem", 00:08:31.292 "trtype": "$TEST_TRANSPORT", 00:08:31.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.292 "adrfam": "ipv4", 00:08:31.292 "trsvcid": "$NVMF_PORT", 00:08:31.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.292 "hdgst": ${hdgst:-false}, 00:08:31.292 "ddgst": ${ddgst:-false} 00:08:31.292 }, 00:08:31.292 "method": "bdev_nvme_attach_controller" 00:08:31.292 } 00:08:31.292 EOF 00:08:31.292 )") 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:31.292 { 00:08:31.292 "params": { 00:08:31.292 "name": "Nvme$subsystem", 00:08:31.292 "trtype": "$TEST_TRANSPORT", 00:08:31.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.292 "adrfam": "ipv4", 00:08:31.292 "trsvcid": "$NVMF_PORT", 00:08:31.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.292 "hdgst": ${hdgst:-false}, 00:08:31.292 "ddgst": ${ddgst:-false} 00:08:31.292 }, 00:08:31.292 "method": "bdev_nvme_attach_controller" 00:08:31.292 } 00:08:31.292 EOF 00:08:31.292 )") 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:31.292 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:31.293 { 00:08:31.293 "params": { 00:08:31.293 "name": "Nvme$subsystem", 00:08:31.293 "trtype": "$TEST_TRANSPORT", 00:08:31.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.293 "adrfam": "ipv4", 00:08:31.293 "trsvcid": "$NVMF_PORT", 00:08:31.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.293 "hdgst": ${hdgst:-false}, 00:08:31.293 "ddgst": ${ddgst:-false} 00:08:31.293 }, 00:08:31.293 "method": "bdev_nvme_attach_controller" 00:08:31.293 } 00:08:31.293 EOF 00:08:31.293 )") 00:08:31.293 08:41:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 870877 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:31.293 "params": { 00:08:31.293 "name": "Nvme1", 00:08:31.293 "trtype": "tcp", 00:08:31.293 "traddr": "10.0.0.2", 00:08:31.293 "adrfam": "ipv4", 00:08:31.293 "trsvcid": "4420", 00:08:31.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.293 "hdgst": false, 00:08:31.293 "ddgst": false 00:08:31.293 }, 00:08:31.293 "method": "bdev_nvme_attach_controller" 00:08:31.293 }' 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:31.293 "params": { 00:08:31.293 "name": "Nvme1", 00:08:31.293 "trtype": "tcp", 00:08:31.293 "traddr": "10.0.0.2", 00:08:31.293 "adrfam": "ipv4", 00:08:31.293 "trsvcid": "4420", 00:08:31.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.293 "hdgst": false, 00:08:31.293 "ddgst": false 00:08:31.293 }, 00:08:31.293 
"method": "bdev_nvme_attach_controller" 00:08:31.293 }' 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:31.293 "params": { 00:08:31.293 "name": "Nvme1", 00:08:31.293 "trtype": "tcp", 00:08:31.293 "traddr": "10.0.0.2", 00:08:31.293 "adrfam": "ipv4", 00:08:31.293 "trsvcid": "4420", 00:08:31.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.293 "hdgst": false, 00:08:31.293 "ddgst": false 00:08:31.293 }, 00:08:31.293 "method": "bdev_nvme_attach_controller" 00:08:31.293 }' 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:31.293 08:41:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:31.293 "params": { 00:08:31.293 "name": "Nvme1", 00:08:31.293 "trtype": "tcp", 00:08:31.293 "traddr": "10.0.0.2", 00:08:31.293 "adrfam": "ipv4", 00:08:31.293 "trsvcid": "4420", 00:08:31.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.293 "hdgst": false, 00:08:31.293 "ddgst": false 00:08:31.293 }, 00:08:31.293 "method": "bdev_nvme_attach_controller" 00:08:31.293 }' 00:08:31.552 [2024-07-26 08:41:49.773602] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:31.552 [2024-07-26 08:41:49.773600] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:31.552 [2024-07-26 08:41:49.773602] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:31.552 [2024-07-26 08:41:49.773643] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
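The gen_nvmf_target_json helper seen above expands one heredoc template per subsystem into the `bdev_nvme_attach_controller` JSON that each bdevperf instance reads from `/dev/fd/63`. A minimal Python sketch of that expansion, using the names and values printed in the log; the outer `subsystems`/`bdev` wrapper is my assumption about the final document shape, and the function name is mine:

```python
import json

def gen_target_json(subsystem=1, trtype="tcp", traddr="10.0.0.2", trsvcid="4420"):
    # One bdev_nvme_attach_controller entry per subsystem, with the same
    # params the log's printf output shows (hdgst/ddgst default to false).
    # Wrapping it in a "subsystems"/"bdev" document is an assumption about
    # what bdevperf's --json input expects.
    return {
        "subsystems": [{
            "subsystem": "bdev",
            "config": [{
                "params": {
                    "name": f"Nvme{subsystem}",
                    "trtype": trtype,
                    "traddr": traddr,
                    "adrfam": "ipv4",
                    "trsvcid": trsvcid,
                    "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
                    "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
                    "hdgst": False,
                    "ddgst": False,
                },
                "method": "bdev_nvme_attach_controller",
            }],
        }]
    }

print(json.dumps(gen_target_json(), indent=2))
```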
00:08:31.553 [2024-07-26 08:41:49.773687] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:08:31.553 [2024-07-26 08:41:49.773687] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:08:31.553 [2024-07-26 08:41:49.773688] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:08:31.553 [2024-07-26 08:41:49.773728] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:08:31.553 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.553 [2024-07-26 08:41:49.918415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:31.553 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.553 [2024-07-26 08:41:49.950001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.813 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.813 [2024-07-26 08:41:50.020389] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
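Each bdevperf job above pins itself to one core via its `-m` mask (0x10, 0x20, 0x40, 0x80, hence the "Reactor started on core 4..7" notices), while the target's `-m 0xF` spans cores 0-3. A short sketch decoding such masks (helper name mine):

```python
def cores_from_mask(mask: int) -> list[int]:
    # An SPDK/DPDK core mask is a bitmap: bit i set means core i is used.
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

# The target mask and the four bdevperf masks from this run:
for mask in (0xF, 0x10, 0x20, 0x40, 0x80):
    print(hex(mask), cores_from_mask(mask))
```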
00:08:31.813 [2024-07-26 08:41:50.025910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:31.813 [2024-07-26 08:41:50.062433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.813 [2024-07-26 08:41:50.116984] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:31.813 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.813 [2024-07-26 08:41:50.139500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:31.813 [2024-07-26 08:41:50.146647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.813 [2024-07-26 08:41:50.218720] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:31.813 [2024-07-26 08:41:50.222215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:31.813 [2024-07-26 08:41:50.249115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.073 [2024-07-26 08:41:50.324177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:32.073 Running I/O for 1 seconds... 00:08:32.073 Running I/O for 1 seconds... 00:08:32.074 Running I/O for 1 seconds... 00:08:32.334 Running I/O for 1 seconds... 
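In the latency tables that follow, the MiB/s column is just IOPS times the IO size (4096 bytes, from `-o 4096`) converted to MiB. A quick check against the reported figures:

```python
def mibps(iops: float, io_size: int = 4096) -> float:
    # Throughput implied by an IOPS figure at a fixed IO size, in MiB/s.
    return iops * io_size / (1 << 20)

# IOPS reported for the write, read, unmap, and flush jobs in this run:
for iops in (9616.31, 5970.43, 6240.71, 196453.99):
    print(f"{iops:>10.2f} IOPS -> {mibps(iops):8.2f} MiB/s")
```

Rounded to two places these reproduce the table's 37.56, 23.32, 24.38, and 767.40 MiB/s.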
00:08:33.271 00:08:33.271 Latency(us) 00:08:33.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.271 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:33.271 Nvme1n1 : 1.02 6240.71 24.38 0.00 0.00 20348.08 8398.32 28932.93 00:08:33.271 =================================================================================================================== 00:08:33.271 Total : 6240.71 24.38 0.00 0.00 20348.08 8398.32 28932.93 00:08:33.271 00:08:33.271 Latency(us) 00:08:33.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.271 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:33.271 Nvme1n1 : 1.01 9616.31 37.56 0.00 0.00 13248.86 7670.14 25243.50 00:08:33.271 =================================================================================================================== 00:08:33.271 Total : 9616.31 37.56 0.00 0.00 13248.86 7670.14 25243.50 00:08:33.271 00:08:33.271 Latency(us) 00:08:33.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.271 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:33.271 Nvme1n1 : 1.01 5970.43 23.32 0.00 0.00 21353.25 7378.87 45632.47 00:08:33.271 =================================================================================================================== 00:08:33.271 Total : 5970.43 23.32 0.00 0.00 21353.25 7378.87 45632.47 00:08:33.271 00:08:33.271 Latency(us) 00:08:33.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.271 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:33.271 Nvme1n1 : 1.00 196453.99 767.40 0.00 0.00 649.07 273.07 867.75 00:08:33.271 =================================================================================================================== 00:08:33.271 Total : 196453.99 767.40 0.00 0.00 649.07 273.07 867.75 00:08:33.532 08:41:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 870878 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 870881 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 870883 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.532 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.532 rmmod nvme_tcp 00:08:33.533 rmmod nvme_fabrics 00:08:33.533 rmmod nvme_keyring 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 
-- # set -e 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 870849 ']' 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 870849 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 870849 ']' 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 870849 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 870849 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 870849' 00:08:33.533 killing process with pid 870849 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 870849 00:08:33.533 08:41:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 870849 00:08:33.792 08:41:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.792 08:41:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.792 08:41:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.792 08:41:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.792 08:41:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.792 08:41:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.792 08:41:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.792 08:41:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.324 00:08:36.324 real 0m7.141s 00:08:36.324 user 0m15.153s 00:08:36.324 sys 0m3.767s 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.324 ************************************ 00:08:36.324 END TEST nvmf_bdev_io_wait 00:08:36.324 ************************************ 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.324 ************************************ 00:08:36.324 START TEST nvmf_queue_depth 00:08:36.324 ************************************ 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:36.324 * Looking for test storage... 00:08:36.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.324 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.325 08:41:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:36.325 
08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.325 08:41:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.256 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.257 08:41:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.257 08:41:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:38.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.257 
08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:38.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:38.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:38.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.257 
08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:08:38.257 00:08:38.257 --- 10.0.0.2 ping statistics --- 00:08:38.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.257 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:08:38.257 00:08:38.257 --- 10.0.0.1 ping statistics --- 00:08:38.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.257 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.257 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=873109 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 873109 
00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 873109 ']' 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.258 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.258 [2024-07-26 08:41:56.600964] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:38.258 [2024-07-26 08:41:56.601068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.258 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.258 [2024-07-26 08:41:56.640186] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:38.258 [2024-07-26 08:41:56.666010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.517 [2024-07-26 08:41:56.752090] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:38.517 [2024-07-26 08:41:56.752152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.517 [2024-07-26 08:41:56.752180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.517 [2024-07-26 08:41:56.752192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.517 [2024-07-26 08:41:56.752202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.517 [2024-07-26 08:41:56.752236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.517 [2024-07-26 08:41:56.896669] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.517 Malloc0 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.517 [2024-07-26 08:41:56.956232] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=873243 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 873243 /var/tmp/bdevperf.sock 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 873243 ']' 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:38.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.517 08:41:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.776 [2024-07-26 08:41:57.002410] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:08:38.776 [2024-07-26 08:41:57.002486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873243 ] 00:08:38.776 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.776 [2024-07-26 08:41:57.034256] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:38.776 [2024-07-26 08:41:57.064304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.776 [2024-07-26 08:41:57.155366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.035 08:41:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.035 08:41:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:39.035 08:41:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:39.035 08:41:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.035 08:41:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.035 NVMe0n1 00:08:39.035 08:41:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.035 08:41:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:39.294 Running I/O for 10 seconds... 
00:08:49.276 00:08:49.276 Latency(us) 00:08:49.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.276 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:49.276 Verification LBA range: start 0x0 length 0x4000 00:08:49.276 NVMe0n1 : 10.09 8515.27 33.26 0.00 0.00 119763.79 22330.79 74953.77 00:08:49.276 =================================================================================================================== 00:08:49.276 Total : 8515.27 33.26 0.00 0.00 119763.79 22330.79 74953.77 00:08:49.276 0 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 873243 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 873243 ']' 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 873243 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 873243 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 873243' 00:08:49.276 killing process with pid 873243 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 873243 00:08:49.276 Received shutdown signal, test time was about 10.000000 seconds 00:08:49.276 00:08:49.276 Latency(us) 00:08:49.276 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.276 =================================================================================================================== 00:08:49.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:49.276 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 873243 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.534 rmmod nvme_tcp 00:08:49.534 rmmod nvme_fabrics 00:08:49.534 rmmod nvme_keyring 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 873109 ']' 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 873109 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 873109 ']' 00:08:49.534 08:42:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 873109 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 873109 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 873109' 00:08:49.534 killing process with pid 873109 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 873109 00:08:49.534 08:42:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 873109 00:08:49.794 08:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.794 08:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.794 08:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:49.794 08:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.794 08:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:49.794 08:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.794 08:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.794 
08:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.332 00:08:52.332 real 0m15.981s 00:08:52.332 user 0m22.395s 00:08:52.332 sys 0m3.105s 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.332 ************************************ 00:08:52.332 END TEST nvmf_queue_depth 00:08:52.332 ************************************ 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.332 ************************************ 00:08:52.332 START TEST nvmf_target_multipath 00:08:52.332 ************************************ 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:52.332 * Looking for test storage... 
00:08:52.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.332 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.333 08:42:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.257 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.258 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:08:54.258 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:54.259 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:54.259 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.259 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:54.260 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.260 08:42:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.260 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:08:54.260 00:08:54.261 --- 10.0.0.2 ping statistics --- 00:08:54.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.261 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:54.261 00:08:54.261 --- 10.0.0.1 ping statistics --- 00:08:54.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.261 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:54.261 08:42:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:54.261 only one NIC for nvmf test 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.261 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:54.261 rmmod nvme_tcp 00:08:54.261 rmmod nvme_fabrics 00:08:54.261 rmmod nvme_keyring 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.262 08:42:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.262 08:42:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:56.173 08:42:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:56.173 00:08:56.173 real 0m4.239s 00:08:56.173 user 0m0.793s 00:08:56.173 sys 0m1.420s 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:56.173 ************************************ 00:08:56.173 END TEST nvmf_target_multipath 00:08:56.173 ************************************ 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.173 
08:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.173 ************************************ 00:08:56.173 START TEST nvmf_zcopy 00:08:56.173 ************************************ 00:08:56.173 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:56.433 * Looking for test storage... 00:08:56.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.433 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:56.433 08:42:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:56.434 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:56.434 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.434 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.434 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.434 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:56.434 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:56.434 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:08:56.434 08:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.335 08:42:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:58.335 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:58.335 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.335 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.336 08:42:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:58.336 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.336 
08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:58.336 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:58.336 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:58.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:58.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:08:58.595 00:08:58.595 --- 10.0.0.2 ping statistics --- 00:08:58.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.595 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:08:58.595 00:08:58.595 --- 10.0.0.1 ping statistics --- 00:08:58.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.595 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=878929 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 878929 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 878929 ']' 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.595 08:42:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.595 [2024-07-26 08:42:16.929746] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:08:58.595 [2024-07-26 08:42:16.929846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.595 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.595 [2024-07-26 08:42:16.968858] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:58.595 [2024-07-26 08:42:16.995477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.857 [2024-07-26 08:42:17.089160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.857 [2024-07-26 08:42:17.089210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.857 [2024-07-26 08:42:17.089225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.857 [2024-07-26 08:42:17.089237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.857 [2024-07-26 08:42:17.089248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
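At this point the trace captures `nvmfpid=878929` and `waitforlisten 878929` blocking until the target's RPC socket at `/var/tmp/spdk.sock` is ready. A minimal stand-alone sketch of that wait, assuming only the socket-file check (the real `waitforlisten` in `autotest_common.sh` also probes the RPC server and the PID; `wait_for_socket` here is a hypothetical simplification):

```shell
#!/usr/bin/env bash
# Hedged sketch of the "wait for nvmf_tgt to listen" step: poll until the
# UNIX-domain RPC socket exists, or give up after a timeout. The real
# helper does more (PID liveness, RPC probe); this only checks the socket file.
wait_for_socket() {
    local sock=$1 timeout=${2:-10} i
    for ((i = 0; i < timeout * 10; i++)); do
        [ -S "$sock" ] && return 0   # socket file present: target is (probably) up
        sleep 0.1
    done
    return 1                         # timed out waiting for the listener
}
# Usage: wait_for_socket /var/tmp/spdk.sock 10 || exit 1
```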
00:08:58.857 [2024-07-26 08:42:17.089274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.857 [2024-07-26 08:42:17.229770] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.857 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
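Read in order, the `rpc_cmd` calls in this phase (together with the listener and namespace steps that follow) amount to the sequence below. This is a dry-run sketch: `rpc` is a hypothetical stub that only echoes the would-be invocation, whereas the real `rpc_cmd` drives `scripts/rpc.py` against `/var/tmp/spdk.sock` inside the target's netns.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target configuration performed by zcopy.sh.
# "rpc" is a stub that echoes the would-be scripts/rpc.py invocation.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                       # TCP transport, zero-copy enabled
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0                              # 32 MiB ramdisk, 4 KiB blocks
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```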
00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.858 [2024-07-26 08:42:17.246006] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.858 malloc0 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.858 08:42:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:58.858 { 00:08:58.858 "params": { 00:08:58.858 "name": "Nvme$subsystem", 00:08:58.858 "trtype": "$TEST_TRANSPORT", 00:08:58.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.858 "adrfam": "ipv4", 00:08:58.858 "trsvcid": "$NVMF_PORT", 00:08:58.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.858 "hdgst": ${hdgst:-false}, 00:08:58.858 "ddgst": ${ddgst:-false} 00:08:58.858 }, 00:08:58.858 "method": "bdev_nvme_attach_controller" 00:08:58.858 } 00:08:58.858 EOF 00:08:58.858 )") 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:58.858 08:42:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:58.858 "params": { 00:08:58.858 "name": "Nvme1", 00:08:58.858 "trtype": "tcp", 00:08:58.858 "traddr": "10.0.0.2", 00:08:58.858 "adrfam": "ipv4", 00:08:58.858 "trsvcid": "4420", 00:08:58.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:58.858 "hdgst": false, 00:08:58.858 "ddgst": false 00:08:58.858 }, 00:08:58.858 "method": "bdev_nvme_attach_controller" 00:08:58.858 }' 00:08:59.151 [2024-07-26 08:42:17.339791] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:59.151 [2024-07-26 08:42:17.339876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879069 ] 00:08:59.151 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.151 [2024-07-26 08:42:17.378740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:59.151 [2024-07-26 08:42:17.410701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.151 [2024-07-26 08:42:17.506610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.418 Running I/O for 10 seconds... 
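The `bdev_nvme_attach_controller` fragment that `gen_nvmf_target_json` prints above can be reproduced and syntax-checked on its own. Note this is only the fragment visible in the log; the full `--json` config `bdevperf` consumes may wrap it further, and the file path here is arbitrary.

```shell
#!/usr/bin/env bash
# Reproduce the attach-controller fragment from the log and verify it
# parses as JSON (python3 -m json.tool exits non-zero on a syntax error).
cat > /tmp/nvme1_fragment.json <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
python3 -m json.tool < /tmp/nvme1_fragment.json > /dev/null && echo "fragment is valid JSON"
```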
00:09:09.405 
00:09:09.405 Latency(us) 
00:09:09.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:09:09.405 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 
00:09:09.405 Verification LBA range: start 0x0 length 0x1000 
00:09:09.405 Nvme1n1 : 10.01 5834.66 45.58 0.00 0.00 21879.08 1468.49 30292.20 
00:09:09.405 =================================================================================================================== 
00:09:09.405 Total : 5834.66 45.58 0.00 0.00 21879.08 1468.49 30292.20 
00:09:09.663 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=880292 00:09:09.663 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:09.663 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.664 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:09.664 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:09.664 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:09.664 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:09.664 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:09.664 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:09.664 { 00:09:09.664 "params": { 00:09:09.664 "name": "Nvme$subsystem", 00:09:09.664 "trtype": "$TEST_TRANSPORT", 00:09:09.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.664 "adrfam": "ipv4", 00:09:09.664 "trsvcid": "$NVMF_PORT", 00:09:09.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.664 "hdgst":
${hdgst:-false},
00:09:09.664 "ddgst": ${ddgst:-false}
00:09:09.664 },
00:09:09.664 "method": "bdev_nvme_attach_controller"
00:09:09.664 }
00:09:09.664 EOF
00:09:09.664 )")
00:09:09.664 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:09:09.664 [2024-07-26 08:42:28.087783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.664 [2024-07-26 08:42:28.087832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.664 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:09:09.664 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:09:09.664 08:42:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:09.664 "params": {
00:09:09.664 "name": "Nvme1",
00:09:09.664 "trtype": "tcp",
00:09:09.664 "traddr": "10.0.0.2",
00:09:09.664 "adrfam": "ipv4",
00:09:09.664 "trsvcid": "4420",
00:09:09.664 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:09.664 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:09.664 "hdgst": false,
00:09:09.664 "ddgst": false
00:09:09.664 },
00:09:09.664 "method": "bdev_nvme_attach_controller"
00:09:09.664 }'
00:09:09.664 [2024-07-26 08:42:28.095732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.664 [2024-07-26 08:42:28.095759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.664 [2024-07-26 08:42:28.103746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.664 [2024-07-26 08:42:28.103770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.664 [2024-07-26 08:42:28.111761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.664 [2024-07-26 08:42:28.111782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.664 [2024-07-26 08:42:28.119779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.664 [2024-07-26 08:42:28.119799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.922 [2024-07-26 08:42:28.125472] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:09:09.922 [2024-07-26 08:42:28.125533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880292 ]
00:09:09.922 [2024-07-26 08:42:28.127817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.922 [2024-07-26 08:42:28.127837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.922 [2024-07-26 08:42:28.135822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.922 [2024-07-26 08:42:28.135842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.922 [2024-07-26 08:42:28.143845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.922 [2024-07-26 08:42:28.143865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.922 [2024-07-26 08:42:28.151884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.922 [2024-07-26 08:42:28.151909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.922 EAL: No free 2048 kB hugepages reported on node 1
00:09:09.922 [2024-07-26 08:42:28.159908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.922 [2024-07-26 08:42:28.159933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.922 [2024-07-26 08:42:28.162169] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:09.922 [2024-07-26 08:42:28.167931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.922 [2024-07-26 08:42:28.167956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.922 [2024-07-26 08:42:28.175953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.922 [2024-07-26 08:42:28.175977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.922 [2024-07-26 08:42:28.183973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.922 [2024-07-26 08:42:28.183997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.191995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.192019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.193530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.923 [2024-07-26 08:42:28.200040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.200102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.208076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.208125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.216074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.216112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.224106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.224127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.232126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.232147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.240144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.240166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.248186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.248217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.256203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.256234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.264189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.264211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.272210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.272232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.280235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.280256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.288253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 
[2024-07-26 08:42:28.288274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.289765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.923 [2024-07-26 08:42:28.296273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.296294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.304307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.304330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.312364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.312401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.320381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.320430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.328409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.328448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.336440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.336478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.344462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.923 [2024-07-26 08:42:28.344500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.923 [2024-07-26 08:42:28.352501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.352540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.360493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.360519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.368536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.368574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.923 [2024-07-26 08:42:28.376562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.923 [2024-07-26 08:42:28.376599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.181 [2024-07-26 08:42:28.384582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.181 [2024-07-26 08:42:28.384620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.181 [2024-07-26 08:42:28.392576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.181 [2024-07-26 08:42:28.392601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.181 [2024-07-26 08:42:28.400596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.181 [2024-07-26 08:42:28.400622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.181 [2024-07-26 08:42:28.408749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.181 [2024-07-26 08:42:28.408780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.181 [2024-07-26 08:42:28.416760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.181 
[2024-07-26 08:42:28.416788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.182 [2024-07-26 08:42:28.424782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.182 [2024-07-26 08:42:28.424809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.182 [2024-07-26 08:42:28.432809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.182 [2024-07-26 08:42:28.432836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.182 [2024-07-26 08:42:28.440830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.182 [2024-07-26 08:42:28.440857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.182 [2024-07-26 08:42:28.448883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.182 [2024-07-26 08:42:28.448911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.182 [2024-07-26 08:42:28.456871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.182 [2024-07-26 08:42:28.456897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.182 [2024-07-26 08:42:28.464923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.182 [2024-07-26 08:42:28.464949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.182 [2024-07-26 08:42:28.472922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.182 [2024-07-26 08:42:28.472950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.182 Running I/O for 5 seconds...
00:09:10.182 [2024-07-26 08:42:28.480946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.480973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.496361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.496399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.509022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.509078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.521661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.521687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.534262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.534289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.546331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.546372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.558404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.558429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.572496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.572521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.584264] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.584292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.596554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.596580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.609371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.609397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.621901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.621943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.182 [2024-07-26 08:42:28.634251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.182 [2024-07-26 08:42:28.634278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.646709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.646735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.659236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.659264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.671386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.671412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.683151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.683178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.695583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.695610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.707530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.707557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.719999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.720026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.732401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.732427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.744584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.744610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.757248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.757275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.769617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.769644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.782266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 
[2024-07-26 08:42:28.782294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.441 [2024-07-26 08:42:28.794265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.441 [2024-07-26 08:42:28.794292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.442 [2024-07-26 08:42:28.806356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.442 [2024-07-26 08:42:28.806396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.442 [2024-07-26 08:42:28.818491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.442 [2024-07-26 08:42:28.818531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.442 [2024-07-26 08:42:28.831146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.442 [2024-07-26 08:42:28.831173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.442 [2024-07-26 08:42:28.843764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.442 [2024-07-26 08:42:28.843789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.442 [2024-07-26 08:42:28.855374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.442 [2024-07-26 08:42:28.855400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.442 [2024-07-26 08:42:28.867585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.442 [2024-07-26 08:42:28.867610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.442 [2024-07-26 08:42:28.879646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.442 [2024-07-26 08:42:28.879673] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.442 [2024-07-26 08:42:28.892138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.442 [2024-07-26 08:42:28.892165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:28.904588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:28.904631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:28.916811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:28.916836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:28.929259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:28.929285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:28.941467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:28.941492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:28.953422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:28.953447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:28.965676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:28.965702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:28.978003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:28.978057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:10.702 [2024-07-26 08:42:28.990535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:28.990575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.003641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.003666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.016196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.016222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.028228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.028255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.040295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.040322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.052731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.052757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.064656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.064683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.076558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.076601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.088495] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.088521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.100005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.100032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.111780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.111806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.123882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.123909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.135805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.135832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.147534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.147560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.702 [2024-07-26 08:42:29.159266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.702 [2024-07-26 08:42:29.159293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.171123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.962 [2024-07-26 08:42:29.171150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.183092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:10.962 [2024-07-26 08:42:29.183119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.194878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.962 [2024-07-26 08:42:29.194903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.206481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.962 [2024-07-26 08:42:29.206512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.218130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.962 [2024-07-26 08:42:29.218157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.229841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.962 [2024-07-26 08:42:29.229868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.241655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.962 [2024-07-26 08:42:29.241681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.253580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.962 [2024-07-26 08:42:29.253606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.265495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.962 [2024-07-26 08:42:29.265521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.277751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.962 
[2024-07-26 08:42:29.277777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.962 [2024-07-26 08:42:29.290122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.962 [2024-07-26 08:42:29.290149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.301817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.301843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.313722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.313749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.325885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.325912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.337132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.337159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.348777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.348804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.360409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.360436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.371884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.371910] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.384464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.384489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.396767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.396793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.409441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.409484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.963 [2024-07-26 08:42:29.421712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.963 [2024-07-26 08:42:29.421737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.434861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.434894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.446638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.446663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.458696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.458721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.470544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.470570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:11.223 [2024-07-26 08:42:29.482779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.482804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.494829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.494854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.507642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.507669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.519844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.519870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.531955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.531981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.543570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.543597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.555617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.555658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.567716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.223 [2024-07-26 08:42:29.567742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.223 [2024-07-26 08:42:29.579357] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.046 [2024-07-26 08:42:31.379153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.379180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.046 [2024-07-26 08:42:31.391454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.391479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.046 [2024-07-26 08:42:31.403798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.403823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.046 [2024-07-26 08:42:31.415901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.415930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.046 [2024-07-26 08:42:31.428660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.428690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.046 [2024-07-26 08:42:31.441530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.441560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.046 [2024-07-26 08:42:31.454015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.454054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.046 [2024-07-26 08:42:31.466448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.466478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:13.046 [2024-07-26 08:42:31.479389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.479419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.046 [2024-07-26 08:42:31.491842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.491867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.046 [2024-07-26 08:42:31.505144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.046 [2024-07-26 08:42:31.505183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.306 [2024-07-26 08:42:31.518312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.306 [2024-07-26 08:42:31.518339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.306 [2024-07-26 08:42:31.530959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.306 [2024-07-26 08:42:31.530989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.306 [2024-07-26 08:42:31.543796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.306 [2024-07-26 08:42:31.543826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.306 [2024-07-26 08:42:31.555918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.306 [2024-07-26 08:42:31.555948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.306 [2024-07-26 08:42:31.568115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.306 [2024-07-26 08:42:31.568141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.306 [2024-07-26 08:42:31.580462] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.580488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.592606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.592636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.604837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.604868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.618709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.618734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.630622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.630651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.642890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.642920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.654684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.654714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.666474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.666514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.678697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.678723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.692751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.692778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.704763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.704789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.716410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.716436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.728334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.728375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.740444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.740470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.752830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.752856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.307 [2024-07-26 08:42:31.765107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.307 [2024-07-26 08:42:31.765134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.776938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 
[2024-07-26 08:42:31.776966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.788582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.788608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.800401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.800428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.811768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.811795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.823648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.823675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.835629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.835668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.847446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.847473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.859289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.859316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.870957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.870984] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.883001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.883028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.894617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.894644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.907729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.907756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.918472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.918499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.930684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.930712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.942581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.942607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.954757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.954798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.966761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.966787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:13.566 [2024-07-26 08:42:31.979132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.979159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:31.991156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:31.991182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:32.003352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:32.003378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.566 [2024-07-26 08:42:32.015388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.566 [2024-07-26 08:42:32.015414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.027259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.027287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.039301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.039327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.051566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.051592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.063659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.063685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.076399] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.076439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.089196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.089222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.101539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.101565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.112782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.112809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.124969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.124995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.137074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.137100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.151155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.151182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.826 [2024-07-26 08:42:32.162142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.826 [2024-07-26 08:42:32.162168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.827 [2024-07-26 08:42:32.174189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:13.827 [2024-07-26 08:42:32.174216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.827 [2024-07-26 08:42:32.186198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.827 [2024-07-26 08:42:32.186226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.827 [2024-07-26 08:42:32.198369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.827 [2024-07-26 08:42:32.198394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.827 [2024-07-26 08:42:32.210740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.827 [2024-07-26 08:42:32.210765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.827 [2024-07-26 08:42:32.223026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.827 [2024-07-26 08:42:32.223074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.827 [2024-07-26 08:42:32.234373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.827 [2024-07-26 08:42:32.234399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.827 [2024-07-26 08:42:32.246075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.827 [2024-07-26 08:42:32.246102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.827 [2024-07-26 08:42:32.257880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.827 [2024-07-26 08:42:32.257905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.827 [2024-07-26 08:42:32.270190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.827 
[2024-07-26 08:42:32.270216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.827 [2024-07-26 08:42:32.281856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.827 [2024-07-26 08:42:32.281882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.087 [2024-07-26 08:42:32.293934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.087 [2024-07-26 08:42:32.293959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.087 [2024-07-26 08:42:32.306646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.087 [2024-07-26 08:42:32.306672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.087 [2024-07-26 08:42:32.318932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.087 [2024-07-26 08:42:32.318958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.087 [2024-07-26 08:42:32.330930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.087 [2024-07-26 08:42:32.330956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.343168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.343198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.355564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.355590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.367205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.367231] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.381138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.381165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.392116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.392142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.403626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.403651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.415999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.416024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.427910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.427935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.439663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.439689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.451639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.451665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.463097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.463142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:14.088 [2024-07-26 08:42:32.474853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.474878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.485706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.485731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.497825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.497850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.509792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.509817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.522199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.522225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.533814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.533839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.088 [2024-07-26 08:42:32.545940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.088 [2024-07-26 08:42:32.545983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.348 [2024-07-26 08:42:32.558178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.348 [2024-07-26 08:42:32.558206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.348 [2024-07-26 08:42:32.570134] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.348 [2024-07-26 08:42:32.570161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.348 [2024-07-26 08:42:32.583986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.348 [2024-07-26 08:42:32.584012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.348 [2024-07-26 08:42:32.595220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.348 [2024-07-26 08:42:32.595247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.348 [2024-07-26 08:42:32.606967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.348 [2024-07-26 08:42:32.606992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.348 [2024-07-26 08:42:32.618371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.348 [2024-07-26 08:42:32.618398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.348 [2024-07-26 08:42:32.630482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.348 [2024-07-26 08:42:32.630507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.348 [2024-07-26 08:42:32.642904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.348 [2024-07-26 08:42:32.642938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.348 [2024-07-26 08:42:32.654878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.348 [2024-07-26 08:42:32.654903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.348 [2024-07-26 08:42:32.666747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.666773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.678808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.678833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.691054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.691103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.703086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.703113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.715380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.715421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.727532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.727573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.739426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.739451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.751082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.751108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.762871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 
[2024-07-26 08:42:32.762896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.774945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.774971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.787322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.787349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.349 [2024-07-26 08:42:32.799265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.349 [2024-07-26 08:42:32.799291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.811184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.811212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.823305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.823332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.834775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.834801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.847131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.847157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.858375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.858401] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.869623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.869655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.881476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.881501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.892817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.892842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.904886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.904911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.916772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.916798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.928617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.928643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.940309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.940337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.952793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.952823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:14.608 [2024-07-26 08:42:32.965703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.965730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.978232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.978259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:32.990931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:32.990957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:33.003421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:33.003447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:33.015750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:33.015776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:33.028009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:33.028034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:33.040377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:33.040418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:33.052295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:33.052321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.608 [2024-07-26 08:42:33.064387] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.608 [2024-07-26 08:42:33.064430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.076569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.076595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.089326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.089353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.100895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.100927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.112619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.112644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.124321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.124363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.136155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.136180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.147539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.147564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.160858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.160883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.171658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.171684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.183507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.183533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.195176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.195202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.207024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.207078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.218629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.218654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.230248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.230274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.242688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.242714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.254731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 
[2024-07-26 08:42:33.254758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.266781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.266806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.279314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.279355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.291224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.291267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.303264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.303290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.868 [2024-07-26 08:42:33.315283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.868 [2024-07-26 08:42:33.315310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.329144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.329178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.340635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.340661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.352680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.352706] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.368636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.368663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.379846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.379871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.391669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.391695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.403845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.403870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.415478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.415503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.427503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.427529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.439558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.439585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.451073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.451099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:15.127 [2024-07-26 08:42:33.462961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.462987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.474921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.474947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.487140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.487168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.497988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.498013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 00:09:15.127 Latency(us) 00:09:15.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.127 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:15.127 Nvme1n1 : 5.01 10510.76 82.12 0.00 0.00 12159.87 5509.88 25437.68 00:09:15.127 =================================================================================================================== 00:09:15.127 Total : 10510.76 82.12 0.00 0.00 12159.87 5509.88 25437.68 00:09:15.127 [2024-07-26 08:42:33.503636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.503662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.511663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.511692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 
[2024-07-26 08:42:33.519682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.519713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.527734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.527783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.535754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.535798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.543772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.543816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.551801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.551847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.559818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.559862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.567841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.567885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.575861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.575906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.127 [2024-07-26 08:42:33.583891] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.127 [2024-07-26 08:42:33.583938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 [2024-07-26 08:42:33.591910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-07-26 08:42:33.591955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 [2024-07-26 08:42:33.599927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-07-26 08:42:33.599974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 [2024-07-26 08:42:33.607953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-07-26 08:42:33.607999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 [2024-07-26 08:42:33.615974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-07-26 08:42:33.616020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 [2024-07-26 08:42:33.623989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-07-26 08:42:33.624033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 [2024-07-26 08:42:33.632008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-07-26 08:42:33.632050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 [2024-07-26 08:42:33.640027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-07-26 08:42:33.640077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.386 [2024-07-26 08:42:33.648017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:15.386 [2024-07-26 08:42:33.648044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 [2024-07-26 08:42:33.656041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-07-26 08:42:33.656081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 [2024-07-26 08:42:33.664104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-07-26 08:42:33.664148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 [2024-07-26 08:42:33.672119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-07-26 08:42:33.672163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 [2024-07-26 08:42:33.680135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-07-26 08:42:33.680166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 [2024-07-26 08:42:33.688116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-07-26 08:42:33.688140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 [2024-07-26 08:42:33.696187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-07-26 08:42:33.696232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 [2024-07-26 08:42:33.704205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-07-26 08:42:33.704249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 [2024-07-26 08:42:33.712207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 
[2024-07-26 08:42:33.712237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 [2024-07-26 08:42:33.720206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-07-26 08:42:33.720227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 [2024-07-26 08:42:33.728229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.387 [2024-07-26 08:42:33.728250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (880292) - No such process 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 880292 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 delay0 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:15.387 08:42:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.387 08:42:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:15.387 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.645 [2024-07-26 08:42:33.886183] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:22.230 Initializing NVMe Controllers 00:09:22.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:22.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:22.230 Initialization complete. Launching workers. 
00:09:22.230 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 106 00:09:22.230 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 380, failed to submit 46 00:09:22.230 success 200, unsuccess 180, failed 0 00:09:22.230 08:42:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:22.230 08:42:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:22.230 08:42:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.230 08:42:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:22.230 08:42:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:22.230 08:42:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:22.230 08:42:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:22.231 08:42:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:22.231 rmmod nvme_tcp 00:09:22.231 rmmod nvme_fabrics 00:09:22.231 rmmod nvme_keyring 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 878929 ']' 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 878929 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 878929 ']' 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 878929 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 878929 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 878929' 00:09:22.231 killing process with pid 878929 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 878929 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 878929 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.231 08:42:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:24.172 00:09:24.172 real 0m27.769s 
00:09:24.172 user 0m40.200s 00:09:24.172 sys 0m8.604s 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.172 ************************************ 00:09:24.172 END TEST nvmf_zcopy 00:09:24.172 ************************************ 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.172 ************************************ 00:09:24.172 START TEST nvmf_nmic 00:09:24.172 ************************************ 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:24.172 * Looking for test storage... 
00:09:24.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.172 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.173 
08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:24.173 08:42:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:24.173 08:42:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:26.074 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:26.074 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.074 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:26.075 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:26.075 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:26.075 08:42:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.075 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:26.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:26.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:09:26.333 00:09:26.333 --- 10.0.0.2 ping statistics --- 00:09:26.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.333 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:09:26.333 00:09:26.333 --- 10.0.0.1 ping statistics --- 00:09:26.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.333 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=883678 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 883678 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 883678 ']' 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.333 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.333 [2024-07-26 08:42:44.712321] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:09:26.333 [2024-07-26 08:42:44.712412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.333 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.333 [2024-07-26 08:42:44.749999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:26.333 [2024-07-26 08:42:44.775879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.592 [2024-07-26 08:42:44.866375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.592 [2024-07-26 08:42:44.866430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.592 [2024-07-26 08:42:44.866444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.592 [2024-07-26 08:42:44.866455] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.592 [2024-07-26 08:42:44.866465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:26.592 [2024-07-26 08:42:44.866515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.592 [2024-07-26 08:42:44.866632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.592 [2024-07-26 08:42:44.866698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.592 [2024-07-26 08:42:44.866701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.592 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.592 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:26.592 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.592 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:26.592 08:42:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.592 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.592 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.592 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.592 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.592 [2024-07-26 08:42:45.017540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.592 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.592 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:26.592 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.593 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.593 Malloc0 00:09:26.593 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.593 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:26.593 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.593 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.851 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.852 [2024-07-26 08:42:45.071322] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:26.852 test case1: single bdev can't be used in multiple subsystems 
00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.852 [2024-07-26 08:42:45.095162] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:26.852 [2024-07-26 08:42:45.095193] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:26.852 [2024-07-26 08:42:45.095208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.852 request: 00:09:26.852 { 00:09:26.852 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:26.852 "namespace": { 00:09:26.852 
"bdev_name": "Malloc0", 00:09:26.852 "no_auto_visible": false 00:09:26.852 }, 00:09:26.852 "method": "nvmf_subsystem_add_ns", 00:09:26.852 "req_id": 1 00:09:26.852 } 00:09:26.852 Got JSON-RPC error response 00:09:26.852 response: 00:09:26.852 { 00:09:26.852 "code": -32602, 00:09:26.852 "message": "Invalid parameters" 00:09:26.852 } 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:26.852 Adding namespace failed - expected result. 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:26.852 test case2: host connect to nvmf target in multiple paths 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.852 [2024-07-26 08:42:45.103290] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.852 08:42:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:27.418 08:42:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:27.987 08:42:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.987 08:42:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:27.987 08:42:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.987 08:42:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:27.987 08:42:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:30.521 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:30.521 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:30.521 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.521 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:30.521 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.521 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:30.521 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:30.521 [global] 00:09:30.521 thread=1 00:09:30.521 invalidate=1 00:09:30.521 rw=write 00:09:30.521 time_based=1 00:09:30.521 runtime=1 00:09:30.521 ioengine=libaio 00:09:30.521 direct=1 00:09:30.521 bs=4096 00:09:30.521 iodepth=1 00:09:30.521 
norandommap=0 00:09:30.521 numjobs=1 00:09:30.521 00:09:30.521 verify_dump=1 00:09:30.521 verify_backlog=512 00:09:30.521 verify_state_save=0 00:09:30.521 do_verify=1 00:09:30.521 verify=crc32c-intel 00:09:30.521 [job0] 00:09:30.521 filename=/dev/nvme0n1 00:09:30.521 Could not set queue depth (nvme0n1) 00:09:30.521 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.521 fio-3.35 00:09:30.521 Starting 1 thread 00:09:31.458 00:09:31.458 job0: (groupid=0, jobs=1): err= 0: pid=884201: Fri Jul 26 08:42:49 2024 00:09:31.458 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:09:31.458 slat (nsec): min=12189, max=37381, avg=23945.59, stdev=9220.51 00:09:31.458 clat (usec): min=40786, max=41065, avg=40959.72, stdev=67.43 00:09:31.458 lat (usec): min=40807, max=41081, avg=40983.66, stdev=64.21 00:09:31.458 clat percentiles (usec): 00:09:31.458 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:31.458 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:31.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:31.458 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:31.458 | 99.99th=[41157] 00:09:31.458 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:09:31.458 slat (nsec): min=7312, max=45281, avg=18315.26, stdev=7997.72 00:09:31.458 clat (usec): min=162, max=499, avg=203.81, stdev=23.41 00:09:31.458 lat (usec): min=170, max=529, avg=222.12, stdev=27.49 00:09:31.458 clat percentiles (usec): 00:09:31.458 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 190], 00:09:31.458 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:09:31.458 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 229], 00:09:31.458 | 99.00th=[ 269], 99.50th=[ 310], 99.90th=[ 498], 99.95th=[ 498], 00:09:31.458 | 99.99th=[ 498] 00:09:31.458 bw ( KiB/s): min= 4096, max= 4096, 
per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:31.458 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:31.458 lat (usec) : 250=94.57%, 500=1.31% 00:09:31.458 lat (msec) : 50=4.12% 00:09:31.458 cpu : usr=0.69%, sys=1.18%, ctx=535, majf=0, minf=2 00:09:31.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.458 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.458 00:09:31.458 Run status group 0 (all jobs): 00:09:31.458 READ: bw=86.4KiB/s (88.5kB/s), 86.4KiB/s-86.4KiB/s (88.5kB/s-88.5kB/s), io=88.0KiB (90.1kB), run=1018-1018msec 00:09:31.458 WRITE: bw=2012KiB/s (2060kB/s), 2012KiB/s-2012KiB/s (2060kB/s-2060kB/s), io=2048KiB (2097kB), run=1018-1018msec 00:09:31.458 00:09:31.458 Disk stats (read/write): 00:09:31.458 nvme0n1: ios=75/512, merge=0/0, ticks=1149/99, in_queue=1248, util=96.09% 00:09:31.458 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:31.458 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.458 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:31.458 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:31.458 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.458 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.459 rmmod nvme_tcp 00:09:31.459 rmmod nvme_fabrics 00:09:31.459 rmmod nvme_keyring 00:09:31.459 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.717 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:31.717 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 883678 ']' 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 883678 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 883678 ']' 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 883678 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 883678 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 883678' 00:09:31.718 killing process with pid 883678 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 883678 00:09:31.718 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 883678 00:09:31.978 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.978 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.978 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.978 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.978 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.978 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.978 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.978 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.885 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.885 00:09:33.885 real 0m9.837s 00:09:33.885 user 0m22.226s 00:09:33.885 sys 0m2.274s 00:09:33.885 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:33.885 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.885 ************************************ 00:09:33.885 END TEST nvmf_nmic 00:09:33.885 ************************************ 00:09:33.885 08:42:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:33.885 08:42:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:33.885 08:42:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.885 08:42:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.885 ************************************ 00:09:33.885 START TEST nvmf_fio_target 00:09:33.885 ************************************ 00:09:33.885 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:33.885 * Looking for test storage... 
00:09:34.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.144 08:42:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.144 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.145 08:42:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:09:34.145 08:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:36.048 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:36.048 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.048 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:36.048 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:36.049 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:36.049 08:42:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:36.049 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:36.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:09:36.307 00:09:36.307 --- 10.0.0.2 ping statistics --- 00:09:36.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.307 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:36.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:09:36.307 00:09:36.307 --- 10.0.0.1 ping statistics --- 00:09:36.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.307 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=886321 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 886321 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 886321 ']' 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.307 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.307 [2024-07-26 08:42:54.712862] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:09:36.307 [2024-07-26 08:42:54.712953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.308 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.308 [2024-07-26 08:42:54.752118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:36.566 [2024-07-26 08:42:54.782795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.566 [2024-07-26 08:42:54.873615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:36.566 [2024-07-26 08:42:54.873676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.566 [2024-07-26 08:42:54.873702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.566 [2024-07-26 08:42:54.873717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.566 [2024-07-26 08:42:54.873729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.566 [2024-07-26 08:42:54.873809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.566 [2024-07-26 08:42:54.873863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.566 [2024-07-26 08:42:54.873978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.566 [2024-07-26 08:42:54.873980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.566 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.566 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:36.566 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:36.566 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:36.566 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.566 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.566 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:36.825 [2024-07-26 08:42:55.250239] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** 
TCP Transport Init *** 00:09:36.825 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.083 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:37.083 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.341 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:37.341 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.905 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:37.905 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.905 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:37.905 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:38.163 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.420 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:38.420 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.712 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # 
concat_malloc_bdevs+='Malloc5 '
00:09:38.712 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:38.993 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:09:38.993 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:09:39.251 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:09:39.509 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:09:39.509 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:39.766 08:42:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:09:39.766 08:42:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:40.024 08:42:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:40.281 [2024-07-26 08:42:58.617982] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:40.281 08:42:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:09:40.539 08:42:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:09:40.797 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:41.363 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:09:41.363 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0
00:09:41.363 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:09:41.363 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]]
00:09:41.363 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4
00:09:41.363 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2
00:09:43.267 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:09:43.267 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:09:43.267 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:09:43.525 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4
00:09:43.525 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:09:43.525 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0
00:09:43.525 08:43:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:43.525 [global]
00:09:43.525 thread=1
00:09:43.525 invalidate=1
00:09:43.525 rw=write
00:09:43.525 time_based=1
00:09:43.525 runtime=1
00:09:43.525 ioengine=libaio
00:09:43.525 direct=1
00:09:43.525 bs=4096
00:09:43.525 iodepth=1
00:09:43.525 norandommap=0
00:09:43.525 numjobs=1
00:09:43.525
00:09:43.525 verify_dump=1
00:09:43.525 verify_backlog=512
00:09:43.525 verify_state_save=0
00:09:43.525 do_verify=1
00:09:43.525 verify=crc32c-intel
00:09:43.525 [job0]
00:09:43.525 filename=/dev/nvme0n1
00:09:43.525 [job1]
00:09:43.525 filename=/dev/nvme0n2
00:09:43.525 [job2]
00:09:43.525 filename=/dev/nvme0n3
00:09:43.525 [job3]
00:09:43.525 filename=/dev/nvme0n4
00:09:43.525 Could not set queue depth (nvme0n1)
00:09:43.525 Could not set queue depth (nvme0n2)
00:09:43.525 Could not set queue depth (nvme0n3)
00:09:43.525 Could not set queue depth (nvme0n4)
00:09:43.525 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:43.525 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:43.525 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:43.525 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:43.525 fio-3.35
00:09:43.525 Starting 4 threads
00:09:44.903
00:09:44.903 job0: (groupid=0, jobs=1): err= 0: pid=887345: Fri Jul 26 08:43:03 2024
00:09:44.903 read: IOPS=74, BW=299KiB/s (306kB/s)(308KiB/1031msec)
00:09:44.903 slat (nsec): min=6297, max=37577, avg=14974.64, stdev=9515.02
00:09:44.903 clat (usec): min=250, max=42305, avg=11654.14, stdev=18610.07
00:09:44.903 lat (usec): min=259, max=42316, avg=11669.11, stdev=18616.75
00:09:44.903 clat percentiles (usec):
00:09:44.903 | 1.00th=[ 251], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 262],
00:09:44.903 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 302],
00:09:44.903 | 70.00th=[ 355], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206],
00:09:44.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:44.903 | 99.99th=[42206]
00:09:44.903 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets
00:09:44.903 slat (nsec): min=7927, max=71185, avg=19284.22, stdev=8778.60
00:09:44.903 clat (usec): min=183, max=542, avg=232.99, stdev=27.70
00:09:44.903 lat (usec): min=192, max=569, avg=252.27, stdev=30.24
00:09:44.903 clat percentiles (usec):
00:09:44.903 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 215],
00:09:44.903 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235],
00:09:44.903 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 281],
00:09:44.903 | 99.00th=[ 314], 99.50th=[ 338], 99.90th=[ 545], 99.95th=[ 545],
00:09:44.903 | 99.99th=[ 545]
00:09:44.903 bw ( KiB/s): min= 4087, max= 4087, per=31.88%, avg=4087.00, stdev= 0.00, samples=1
00:09:44.903 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1
00:09:44.903 lat (usec) : 250=71.65%, 500=24.45%, 750=0.17%
00:09:44.903 lat (msec) : 4=0.17%, 50=3.57%
00:09:44.903 cpu : usr=0.87%, sys=1.17%, ctx=590, majf=0, minf=1
00:09:44.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:44.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:44.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:44.903 issued rwts: total=77,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:44.903 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:44.903 job1: (groupid=0, jobs=1): err= 0: pid=887346: Fri Jul 26 08:43:03 2024
00:09:44.903 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec)
00:09:44.903 slat (nsec): min=13052, max=34702, avg=22573.00, stdev=9095.92
00:09:44.903 clat (usec): min=40849, max=42011, avg=41372.84, stdev=504.74
00:09:44.903 lat (usec): min=40871, max=42029, avg=41395.42, stdev=506.42
00:09:44.903 clat percentiles (usec):
00:09:44.903 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:09:44.903 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681],
00:09:44.903 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:09:44.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:44.903 | 99.99th=[42206]
00:09:44.903 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets
00:09:44.903 slat (nsec): min=6937, max=34918, avg=15407.62, stdev=5717.87
00:09:44.903 clat (usec): min=183, max=348, avg=220.31, stdev=18.10
00:09:44.903 lat (usec): min=191, max=365, avg=235.72, stdev=18.51
00:09:44.903 clat percentiles (usec):
00:09:44.903 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 206],
00:09:44.903 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223],
00:09:44.904 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 251],
00:09:44.904 | 99.00th=[ 281], 99.50th=[ 306], 99.90th=[ 351], 99.95th=[ 351],
00:09:44.904 | 99.99th=[ 351]
00:09:44.904 bw ( KiB/s): min= 4087, max= 4087, per=31.88%, avg=4087.00, stdev= 0.00, samples=1
00:09:44.904 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1
00:09:44.904 lat (usec) : 250=90.82%, 500=5.06%
00:09:44.904 lat (msec) : 50=4.12%
00:09:44.904 cpu : usr=0.19%, sys=0.87%, ctx=535, majf=0, minf=2
00:09:44.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:44.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:44.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:44.904 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:44.904 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:44.904 job2: (groupid=0, jobs=1): err= 0: pid=887347: Fri Jul 26 08:43:03 2024
00:09:44.904 read: IOPS=515, BW=2063KiB/s (2112kB/s)(2104KiB/1020msec)
00:09:44.904 slat (nsec): min=5724, max=34641, avg=8524.33, stdev=4753.02
00:09:44.904 clat (usec): min=321, max=42055, avg=1440.77, stdev=6580.30
00:09:44.904 lat (usec): min=327, max=42073, avg=1449.30, stdev=6583.57
00:09:44.904 clat percentiles (usec):
00:09:44.904 | 1.00th=[ 330], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 343],
00:09:44.904 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 355],
00:09:44.904 | 70.00th=[ 359], 80.00th=[ 363], 90.00th=[ 379], 95.00th=[ 396],
00:09:44.904 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:44.904 | 99.99th=[42206]
00:09:44.904 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets
00:09:44.904 slat (nsec): min=6597, max=40383, avg=12677.71, stdev=6200.91
00:09:44.904 clat (usec): min=163, max=466, avg=233.90, stdev=41.04
00:09:44.904 lat (usec): min=170, max=482, avg=246.57, stdev=43.47
00:09:44.904 clat percentiles (usec):
00:09:44.904 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 196],
00:09:44.904 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241],
00:09:44.904 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 297],
00:09:44.904 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 429], 99.95th=[ 465],
00:09:44.904 | 99.99th=[ 465]
00:09:44.904 bw ( KiB/s): min= 8175, max= 8175, per=63.77%, avg=8175.00, stdev= 0.00, samples=1
00:09:44.904 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1
00:09:44.904 lat (usec) : 250=47.68%, 500=51.35%, 750=0.06%
00:09:44.904 lat (msec) : 50=0.90%
00:09:44.904 cpu : usr=1.86%, sys=1.18%, ctx=1550, majf=0, minf=1
00:09:44.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:44.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:44.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:44.904 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:44.904 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:44.904 job3: (groupid=0, jobs=1): err= 0: pid=887348: Fri Jul 26 08:43:03 2024
00:09:44.904 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec)
00:09:44.904 slat (nsec): min=6979, max=61126, avg=16494.00, stdev=5225.52
00:09:44.904 clat (usec): min=289, max=42067, avg=609.60, stdev=3226.01
00:09:44.904 lat (usec): min=298, max=42081, avg=626.10, stdev=3225.83
00:09:44.904 clat percentiles (usec):
00:09:44.904 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326],
00:09:44.904 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 334], 60.00th=[ 338],
00:09:44.904 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 474],
00:09:44.904 | 99.00th=[ 611], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206],
00:09:44.904 | 99.99th=[42206]
00:09:44.904 write: IOPS=1264, BW=5059KiB/s (5180kB/s)(5064KiB/1001msec); 0 zone resets
00:09:44.904 slat (nsec): min=6460, max=69236, avg=21046.19, stdev=9713.59
00:09:44.904 clat (usec): min=179, max=459, avg=252.97, stdev=61.15
00:09:44.904 lat (usec): min=186, max=500, avg=274.02, stdev=67.68
00:09:44.904 clat percentiles (usec):
00:09:44.904 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 215],
00:09:44.904 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237],
00:09:44.904 | 70.00th=[ 247], 80.00th=[ 269], 90.00th=[ 371], 95.00th=[ 404],
00:09:44.904 | 99.00th=[ 433], 99.50th=[ 441], 99.90th=[ 457], 99.95th=[ 461],
00:09:44.904 | 99.99th=[ 461]
00:09:44.904 bw ( KiB/s): min= 4087, max= 4087, per=31.88%, avg=4087.00, stdev= 0.00, samples=1
00:09:44.904 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1
00:09:44.904 lat (usec) : 250=39.65%, 500=58.91%, 750=1.14%
00:09:44.904 lat (msec) : 50=0.31%
00:09:44.904 cpu : usr=3.40%, sys=5.10%, ctx=2292, majf=0, minf=1
00:09:44.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:44.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:44.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:44.904 issued rwts: total=1024,1266,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:44.904 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:44.904
00:09:44.904 Run status group 0 (all jobs):
00:09:44.904 READ: bw=6379KiB/s (6532kB/s), 85.1KiB/s-4092KiB/s (87.1kB/s-4190kB/s), io=6596KiB (6754kB), run=1001-1034msec
00:09:44.904 WRITE: bw=12.5MiB/s (13.1MB/s), 1981KiB/s-5059KiB/s (2028kB/s-5180kB/s), io=12.9MiB (13.6MB), run=1001-1034msec
00:09:44.904
00:09:44.904 Disk stats (read/write):
00:09:44.904 nvme0n1: ios=97/512, merge=0/0, ticks=1645/103, in_queue=1748, util=97.29%
00:09:44.904 nvme0n2: ios=43/512, merge=0/0, ticks=1697/105, in_queue=1802, util=97.55%
00:09:44.904 nvme0n3: ios=521/1024, merge=0/0, ticks=545/232, in_queue=777, util=88.88%
00:09:44.904 nvme0n4: ios=779/1024, merge=0/0, ticks=1471/259, in_queue=1730, util=97.46%
00:09:44.904 08:43:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:09:44.904 [global]
00:09:44.904 thread=1
00:09:44.904 invalidate=1
00:09:44.904 rw=randwrite
00:09:44.904 time_based=1
00:09:44.904 runtime=1
00:09:44.904 ioengine=libaio
00:09:44.904 direct=1
00:09:44.904 bs=4096
00:09:44.904 iodepth=1
00:09:44.904 norandommap=0
00:09:44.904 numjobs=1
00:09:44.904
00:09:44.904 verify_dump=1
00:09:44.904 verify_backlog=512
00:09:44.904 verify_state_save=0
00:09:44.904 do_verify=1
00:09:44.904 verify=crc32c-intel
00:09:44.904 [job0]
00:09:44.904 filename=/dev/nvme0n1
00:09:44.904 [job1]
00:09:44.904 filename=/dev/nvme0n2
00:09:44.904 [job2]
00:09:44.904 filename=/dev/nvme0n3
00:09:44.904 [job3]
00:09:44.904 filename=/dev/nvme0n4
00:09:44.904 Could not set queue depth (nvme0n1)
00:09:44.904 Could not set queue depth (nvme0n2)
00:09:44.904 Could not set queue depth (nvme0n3)
00:09:44.904 Could not set queue depth (nvme0n4)
00:09:45.161 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:45.161 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:45.161 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:45.161 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:45.161 fio-3.35
00:09:45.162 Starting 4 threads
00:09:46.542
00:09:46.542 job0: (groupid=0, jobs=1): err= 0: pid=887583: Fri Jul 26 08:43:04 2024
00:09:46.542 read: IOPS=20, BW=82.5KiB/s (84.5kB/s)(84.0KiB/1018msec)
00:09:46.542 slat (nsec): min=7283, max=36403, avg=30433.62, stdev=9385.65
00:09:46.542 clat (usec): min=40866, max=41399, avg=40980.89, stdev=109.33
00:09:46.542 lat (usec): min=40902, max=41406, avg=41011.33, stdev=102.21
00:09:46.542 clat percentiles (usec):
00:09:46.542 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:09:46.542 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:09:46.542 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:09:46.542 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:09:46.542 | 99.99th=[41157]
00:09:46.542 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets
00:09:46.542 slat (nsec): min=6830, max=30988, avg=10700.65, stdev=3601.91
00:09:46.542 clat (usec): min=185, max=894, avg=292.16, stdev=86.83
00:09:46.542 lat (usec): min=194, max=901, avg=302.86, stdev=88.12
00:09:46.542 clat percentiles (usec):
00:09:46.542 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 223],
00:09:46.542 | 30.00th=[ 237], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 289],
00:09:46.542 | 70.00th=[ 314], 80.00th=[ 351], 90.00th=[ 420], 95.00th=[ 465],
00:09:46.542 | 99.00th=[ 545], 99.50th=[ 644], 99.90th=[ 898], 99.95th=[ 898],
00:09:46.542 | 99.99th=[ 898]
00:09:46.542 bw ( KiB/s): min= 4096, max= 4096, per=31.49%, avg=4096.00, stdev= 0.00, samples=1
00:09:46.542 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:46.542 lat (usec) : 250=36.40%, 500=57.60%, 750=1.88%, 1000=0.19%
00:09:46.542 lat (msec) : 50=3.94%
00:09:46.542 cpu : usr=0.59%, sys=0.49%, ctx=534, majf=0, minf=2
00:09:46.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:46.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:46.542 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:46.542 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:46.542 job1: (groupid=0, jobs=1): err= 0: pid=887584: Fri Jul 26 08:43:04 2024
00:09:46.542 read: IOPS=1279, BW=5117KiB/s (5240kB/s)(5204KiB/1017msec)
00:09:46.542 slat (nsec): min=4404, max=51843, avg=14429.53, stdev=9017.31
00:09:46.542 clat (usec): min=228, max=42047, avg=498.24, stdev=3040.60
00:09:46.542 lat (usec): min=233, max=42080, avg=512.67, stdev=3041.47
00:09:46.542 clat percentiles (usec):
00:09:46.542 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249],
00:09:46.542 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273],
00:09:46.542 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 347],
00:09:46.542 | 99.00th=[ 392], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206],
00:09:46.542 | 99.99th=[42206]
00:09:46.542 write: IOPS=1510, BW=6041KiB/s (6186kB/s)(6144KiB/1017msec); 0 zone resets
00:09:46.542 slat (nsec): min=5602, max=58450, avg=13266.49, stdev=6185.83
00:09:46.542 clat (usec): min=153, max=775, avg=206.17, stdev=51.85
00:09:46.542 lat (usec): min=160, max=782, avg=219.44, stdev=51.79
00:09:46.542 clat percentiles (usec):
00:09:46.542 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172],
00:09:46.542 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 206],
00:09:46.542 | 70.00th=[ 219], 80.00th=[ 235], 90.00th=[ 265], 95.00th=[ 306],
00:09:46.542 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 537], 99.95th=[ 775],
00:09:46.542 | 99.99th=[ 775]
00:09:46.542 bw ( KiB/s): min= 4096, max= 8192, per=47.24%, avg=6144.00, stdev=2896.31, samples=2
00:09:46.542 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2
00:09:46.542 lat (usec) : 250=57.67%, 500=41.98%, 750=0.07%, 1000=0.04%
00:09:46.542 lat (msec) : 50=0.25%
00:09:46.542 cpu : usr=2.36%, sys=3.84%, ctx=2837, majf=0, minf=1
00:09:46.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:46.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:46.542 issued rwts: total=1301,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:46.542 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:46.542 job2: (groupid=0, jobs=1): err= 0: pid=887585: Fri Jul 26 08:43:04 2024
00:09:46.542 read: IOPS=21, BW=84.5KiB/s (86.6kB/s)(88.0KiB/1041msec)
00:09:46.542 slat (nsec): min=7515, max=37475, avg=29962.36, stdev=10379.18
00:09:46.542 clat (usec): min=40953, max=42032, avg=41821.09, stdev=336.86
00:09:46.542 lat (usec): min=40989, max=42069, avg=41851.05, stdev=336.53
00:09:46.542 clat percentiles (usec):
00:09:46.542 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681],
00:09:46.542 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:09:46.542 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:09:46.542 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:46.542 | 99.99th=[42206]
00:09:46.542 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets
00:09:46.542 slat (nsec): min=7969, max=24427, avg=9628.63, stdev=1890.98
00:09:46.542 clat (usec): min=175, max=460, avg=220.44, stdev=28.56
00:09:46.542 lat (usec): min=186, max=482, avg=230.07, stdev=29.03
00:09:46.542 clat percentiles (usec):
00:09:46.542 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206],
00:09:46.542 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221],
00:09:46.542 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 247],
00:09:46.542 | 99.00th=[ 363], 99.50th=[ 437], 99.90th=[ 461], 99.95th=[ 461],
00:09:46.542 | 99.99th=[ 461]
00:09:46.542 bw ( KiB/s): min= 4096, max= 4096, per=31.49%, avg=4096.00, stdev= 0.00, samples=1
00:09:46.542 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:46.542 lat (usec) : 250=91.76%, 500=4.12%
00:09:46.542 lat (msec) : 50=4.12%
00:09:46.542 cpu : usr=0.38%, sys=0.58%, ctx=535, majf=0, minf=1
00:09:46.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:46.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:46.542 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:46.542 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:46.542 job3: (groupid=0, jobs=1): err= 0: pid=887586: Fri Jul 26 08:43:04 2024
00:09:46.542 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:09:46.542 slat (nsec): min=4912, max=68729, avg=20201.21, stdev=11336.85
00:09:46.542 clat (usec): min=282, max=42194, avg=1554.08, stdev=7017.67
00:09:46.542 lat (usec): min=288, max=42208, avg=1574.28, stdev=7018.92
00:09:46.542 clat percentiles (usec):
00:09:46.542 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 293], 20.00th=[ 302],
00:09:46.542 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 355],
00:09:46.542 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 412],
00:09:46.542 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:09:46.542 | 99.99th=[42206]
00:09:46.542 write: IOPS=824, BW=3297KiB/s (3376kB/s)(3300KiB/1001msec); 0 zone resets
00:09:46.542 slat (nsec): min=6133, max=55563, avg=11553.20, stdev=6333.89
00:09:46.542 clat (usec): min=171, max=448, avg=216.56, stdev=33.65
00:09:46.542 lat (usec): min=186, max=459, avg=228.11, stdev=32.91
00:09:46.542 clat percentiles (usec):
00:09:46.542 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184],
00:09:46.542 | 30.00th=[ 190], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 223],
00:09:46.542 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 262], 95.00th=[ 273],
00:09:46.542 | 99.00th=[ 326], 99.50th=[ 371], 99.90th=[ 449], 99.95th=[ 449],
00:09:46.542 | 99.99th=[ 449]
00:09:46.542 bw ( KiB/s): min= 4096, max= 4096, per=31.49%, avg=4096.00, stdev= 0.00, samples=1
00:09:46.542 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:46.542 lat (usec) : 250=52.51%, 500=46.37%
00:09:46.542 lat (msec) : 50=1.12%
00:09:46.542 cpu : usr=0.90%, sys=2.20%, ctx=1337, majf=0, minf=1
00:09:46.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:46.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:46.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:46.542 issued rwts: total=512,825,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:46.542 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:46.542
00:09:46.542 Run status group 0 (all jobs):
00:09:46.542 READ: bw=7132KiB/s (7303kB/s), 82.5KiB/s-5117KiB/s (84.5kB/s-5240kB/s), io=7424KiB (7602kB), run=1001-1041msec
00:09:46.542 WRITE: bw=12.7MiB/s (13.3MB/s), 1967KiB/s-6041KiB/s (2015kB/s-6186kB/s), io=13.2MiB (13.9MB), run=1001-1041msec
00:09:46.542
00:09:46.542 Disk stats (read/write):
00:09:46.542 nvme0n1: ios=67/512, merge=0/0, ticks=853/139, in_queue=992, util=98.00%
00:09:46.543 nvme0n2: ios=1312/1536, merge=0/0, ticks=469/307, in_queue=776, util=86.79%
00:09:46.543 nvme0n3: ios=49/512, merge=0/0, ticks=1319/109, in_queue=1428, util=98.43%
00:09:46.543 nvme0n4: ios=291/512, merge=0/0, ticks=716/118, in_queue=834, util=89.68%
00:09:46.543 08:43:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:09:46.543 [global]
00:09:46.543 thread=1
00:09:46.543 invalidate=1
00:09:46.543 rw=write
00:09:46.543 time_based=1
00:09:46.543 runtime=1
00:09:46.543 ioengine=libaio
00:09:46.543 direct=1
00:09:46.543 bs=4096
00:09:46.543 iodepth=128
00:09:46.543 norandommap=0
00:09:46.543 numjobs=1
00:09:46.543
00:09:46.543 verify_dump=1
00:09:46.543 verify_backlog=512
00:09:46.543 verify_state_save=0
00:09:46.543 do_verify=1
00:09:46.543 verify=crc32c-intel
00:09:46.543 [job0]
00:09:46.543 filename=/dev/nvme0n1
00:09:46.543 [job1]
00:09:46.543 filename=/dev/nvme0n2
00:09:46.543 [job2]
00:09:46.543 filename=/dev/nvme0n3
00:09:46.543 [job3]
00:09:46.543 filename=/dev/nvme0n4
00:09:46.543 Could not set queue depth (nvme0n1)
00:09:46.543 Could not set queue depth (nvme0n2)
00:09:46.543 Could not set queue depth (nvme0n3)
00:09:46.543 Could not set queue depth (nvme0n4)
00:09:46.543 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:46.543 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:46.543 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:46.543 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:46.543 fio-3.35
00:09:46.543 Starting 4 threads
00:09:47.921
00:09:47.921 job0: (groupid=0, jobs=1): err= 0: pid=887932: Fri Jul 26 08:43:06 2024
00:09:47.921 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec)
00:09:47.921 slat (usec): min=3, max=3853, avg=92.54, stdev=442.23
00:09:47.921 clat (usec): min=8445, max=20402, avg=12396.29, stdev=1846.62
00:09:47.921 lat (usec): min=8470, max=20420, avg=12488.83, stdev=1874.06
00:09:47.921 clat percentiles (usec):
00:09:47.921 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10814], 20.00th=[11338],
00:09:47.921 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256],
00:09:47.921 | 70.00th=[12518], 80.00th=[12911], 90.00th=[14484], 95.00th=[16909],
00:09:47.921 | 99.00th=[17695], 99.50th=[19530], 99.90th=[19792], 99.95th=[20317],
00:09:47.921 | 99.99th=[20317]
00:09:47.921 write: IOPS=5131, BW=20.0MiB/s (21.0MB/s)(20.2MiB/1006msec); 0 zone resets
00:09:47.921 slat (usec): min=4, max=14341, avg=89.85, stdev=518.48
00:09:47.921 clat (usec): min=5007, max=34349, avg=12365.15, stdev=3356.05
00:09:47.921 lat (usec): min=5818, max=34369, avg=12455.00, stdev=3396.49
00:09:47.921 clat percentiles (usec):
00:09:47.921 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11076],
00:09:47.921 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863],
00:09:47.921 | 70.00th=[11994], 80.00th=[12256], 90.00th=[13960], 95.00th=[19792],
00:09:47.921 | 99.00th=[29492], 99.50th=[31327], 99.90th=[31327], 99.95th=[34341],
00:09:47.921 | 99.99th=[34341]
00:09:47.921 bw ( KiB/s): min=18584, max=22376, per=31.62%, avg=20480.00, stdev=2681.35, samples=2
00:09:47.921 iops : min= 4646, max= 5594, avg=5120.00, stdev=670.34, samples=2
00:09:47.921 lat (msec) : 10=6.06%, 20=91.43%, 50=2.51%
00:09:47.921 cpu : usr=7.56%, sys=12.24%, ctx=440, majf=0, minf=1
00:09:47.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:09:47.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:47.921 issued rwts: total=5120,5162,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.921 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:47.921 job1: (groupid=0, jobs=1): err= 0: pid=887933: Fri Jul 26 08:43:06 2024
00:09:47.921 read: IOPS=4646, BW=18.1MiB/s (19.0MB/s)(19.0MiB/1046msec)
00:09:47.921 slat (usec): min=2, max=16381, avg=97.57, stdev=685.92
00:09:47.921 clat (usec): min=6115, max=56168, avg=13470.10, stdev=8284.81
00:09:47.921 lat (usec): min=6122, max=57922, avg=13567.67, stdev=8326.47
00:09:47.921 clat percentiles (usec):
00:09:47.921 | 1.00th=[ 6915], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10159],
00:09:47.921 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945],
00:09:47.921 | 70.00th=[11469], 80.00th=[12649], 90.00th=[21365], 95.00th=[32900],
00:09:47.921 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[56361],
00:09:47.921 | 99.99th=[56361]
00:09:47.921 write: IOPS=4894, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1046msec); 0 zone resets
00:09:47.921 slat (usec): min=3, max=15620, avg=92.37, stdev=567.38
00:09:47.921 clat (usec): min=4807, max=38965, avg=13018.03, stdev=5322.98
00:09:47.921 lat (usec): min=4815, max=38994, avg=13110.41, stdev=5362.87
00:09:47.921 clat percentiles (usec):
00:09:47.921 | 1.00th=[ 6521], 5.00th=[ 8356], 10.00th=[ 9765], 20.00th=[10028],
00:09:47.921 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814],
00:09:47.921 | 70.00th=[12125], 80.00th=[16909], 90.00th=[20841], 95.00th=[23987],
00:09:47.921 | 99.00th=[31589], 99.50th=[33817], 99.90th=[35914], 99.95th=[38011],
00:09:47.922 | 99.99th=[39060]
00:09:47.922 bw ( KiB/s): min=19352, max=21608, per=31.62%, avg=20480.00, stdev=1595.23, samples=2
00:09:47.922 iops : min= 4838, max= 5402, avg=5120.00, stdev=398.81, samples=2
00:09:47.922 lat (msec) : 10=15.29%, 20=72.99%, 50=11.09%, 100=0.63%
00:09:47.922 cpu : usr=7.37%, sys=9.28%, ctx=415, majf=0, minf=1
00:09:47.922 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:09:47.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:47.922 issued rwts: total=4860,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.922 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:47.922 job2: (groupid=0, jobs=1): err= 0: pid=887934: Fri Jul 26 08:43:06 2024
00:09:47.922 read: IOPS=2583, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1012msec)
00:09:47.922 slat (usec): min=3, max=17314, avg=188.17, stdev=1169.87
00:09:47.922 clat (usec): min=3663, max=55244, avg=23214.21, stdev=11469.95
00:09:47.922 lat (usec): min=3670, max=55260, avg=23402.39, stdev=11582.04
00:09:47.922 clat percentiles (usec):
00:09:47.922 | 1.00th=[ 7177], 5.00th=[ 9372], 10.00th=[10814], 20.00th=[12387],
00:09:47.922 | 30.00th=[14091], 40.00th=[16188], 50.00th=[19006], 60.00th=[22938],
00:09:47.922 | 70.00th=[31589], 80.00th=[34866], 90.00th=[40109], 95.00th=[42730],
00:09:47.922 | 99.00th=[46924], 99.50th=[50070], 99.90th=[54264], 99.95th=[54789],
00:09:47.922 | 99.99th=[55313]
00:09:47.922 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec); 0 zone resets
00:09:47.922 slat (usec): min=3, max=13993, avg=154.61, stdev=909.20
00:09:47.922 clat (usec): min=995, max=46614, avg=21741.99, stdev=7473.96
00:09:47.922 lat (usec): min=1004, max=46629, avg=21896.60, stdev=7554.78
00:09:47.922 clat percentiles (usec):
00:09:47.922 | 1.00th=[ 7963], 5.00th=[11338], 10.00th=[12125], 20.00th=[15139],
00:09:47.922 | 30.00th=[16909], 40.00th=[18482], 50.00th=[22676], 60.00th=[23725],
00:09:47.922 | 70.00th=[25560], 80.00th=[28181], 90.00th=[32113], 95.00th=[34341],
00:09:47.922 | 99.00th=[39060], 99.50th=[39060], 99.90th=[42206], 99.95th=[43779],
00:09:47.922 | 99.99th=[46400]
00:09:47.922 bw ( KiB/s): min=11128, max=12864, per=18.52%, avg=11996.00, stdev=1227.54, samples=2
00:09:47.922 iops : min= 2782, max= 3216, avg=2999.00, stdev=306.88, samples=2
00:09:47.922 lat (usec) : 1000=0.04%
00:09:47.922 lat (msec) : 2=0.07%, 4=0.12%, 10=3.66%, 20=44.93%, 50=50.95%
00:09:47.922 lat (msec) : 100=0.23%
00:09:47.922 cpu : usr=2.18%, sys=4.85%, ctx=299, majf=0, minf=1
00:09:47.922 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:09:47.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:47.922 issued rwts: total=2614,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.922 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:47.922 job3: (groupid=0, jobs=1): err= 0: pid=887935: Fri Jul 26 08:43:06 2024
00:09:47.922 read: IOPS=3070, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1008msec)
00:09:47.922 slat (usec): min=3, max=16259, avg=141.17, stdev=874.21
00:09:47.922 clat (usec): min=6832, max=42946, avg=18407.50, stdev=4861.90
00:09:47.922 lat (usec): min=9941, max=42991, avg=18548.67, stdev=4922.51
00:09:47.922 clat percentiles (usec):
00:09:47.922 | 1.00th=[11076], 5.00th=[11863], 10.00th=[12780], 20.00th=[14091],
00:09:47.922 | 30.00th=[15533], 40.00th=[17171], 50.00th=[17957], 60.00th=[19006],
00:09:47.922 | 70.00th=[19530], 80.00th=[21365], 90.00th=[23987], 95.00th=[30278],
00:09:47.922 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[40109],
00:09:47.922 | 99.99th=[42730]
00:09:47.922 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets
00:09:47.922 slat (usec): min=4, max=15481, avg=145.70, stdev=832.16
00:09:47.922 clat (usec): min=11883, max=38458, avg=19607.58, stdev=5164.23
00:09:47.922 lat (usec): min=11895, max=38509, avg=19753.28, stdev=5233.99
00:09:47.922 clat percentiles (usec):
00:09:47.922 | 1.00th=[12780], 5.00th=[13435], 10.00th=[13698], 20.00th=[15401],
00:09:47.922 | 30.00th=[15926], 40.00th=[17957], 50.00th=[18744], 60.00th=[19792],
00:09:47.922 | 70.00th=[21365], 80.00th=[23462], 90.00th=[26870], 95.00th=[31065],
00:09:47.922 | 99.00th=[35390], 99.50th=[36439], 99.90th=[36963], 99.95th=[38011],
00:09:47.922 | 99.99th=[38536]
00:09:47.922 bw ( KiB/s): min=13184, max=14656, per=21.49%, avg=13920.00, stdev=1040.86, samples=2
00:09:47.922 iops : min= 3296, max= 3664, avg=3480.00, stdev=260.22, samples=2
00:09:47.922 lat (msec) : 10=0.07%, 20=67.51%, 50=32.42%
00:09:47.922 cpu : usr=5.46%, sys=7.94%, ctx=322, majf=0, minf=1
00:09:47.922 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:09:47.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:47.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:47.922 issued rwts: total=3095,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:47.922 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:47.922
00:09:47.922 Run status group 0 (all jobs):
00:09:47.922 READ: bw=58.6MiB/s (61.4MB/s), 10.1MiB/s-19.9MiB/s (10.6MB/s-20.8MB/s), io=61.3MiB (64.3MB), run=1006-1046msec
00:09:47.922 WRITE: bw=63.3MiB/s (66.3MB/s), 11.9MiB/s-20.0MiB/s (12.4MB/s-21.0MB/s), io=66.2MiB (69.4MB), run=1006-1046msec
00:09:47.922
00:09:47.922 Disk stats (read/write):
00:09:47.922 nvme0n1: ios=4638/4615, merge=0/0, ticks=17392/15745, in_queue=33137, util=95.49%
00:09:47.922 nvme0n2: ios=4274/4608, merge=0/0, ticks=25954/25085, in_queue=51039, util=89.24%
00:09:47.922 nvme0n3: ios=2138/2560, merge=0/0, ticks=20193/21401, in_queue=41594, util=91.77%
00:09:47.922 nvme0n4: ios=2617/3015, merge=0/0, ticks=24015/27063, in_queue=51078, util=96.12%
00:09:47.922 08:43:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:09:47.922 [global]
00:09:47.922 thread=1
00:09:47.922 invalidate=1
00:09:47.922 rw=randwrite
00:09:47.922 time_based=1
00:09:47.922 runtime=1
00:09:47.922 ioengine=libaio
00:09:47.922 direct=1
00:09:47.922 bs=4096
00:09:47.922 iodepth=128
00:09:47.922 norandommap=0
00:09:47.922 numjobs=1
00:09:47.922
00:09:47.922 verify_dump=1
00:09:47.922 verify_backlog=512
00:09:47.922 verify_state_save=0
00:09:47.922 do_verify=1
00:09:47.922 verify=crc32c-intel
00:09:47.922 [job0]
00:09:47.922 filename=/dev/nvme0n1
00:09:47.922 [job1]
00:09:47.922 filename=/dev/nvme0n2
00:09:47.922 [job2]
00:09:47.922 filename=/dev/nvme0n3
00:09:47.922 [job3]
00:09:47.922 filename=/dev/nvme0n4
00:09:47.922 Could not set queue depth (nvme0n1)
00:09:47.922 Could not set queue depth (nvme0n2)
00:09:47.922 Could not set queue depth (nvme0n3)
00:09:47.922 Could not set queue depth (nvme0n4)
00:09:48.180 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:48.180 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:48.180 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:48.180 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:09:48.180 fio-3.35
00:09:48.180 Starting 4 threads
00:09:49.555
00:09:49.555 job0: (groupid=0, jobs=1): err= 0: pid=888167: Fri Jul 26 08:43:07 2024
00:09:49.555 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec)
00:09:49.555 slat (usec): min=2, max=11341, avg=115.19, stdev=841.40
00:09:49.555 clat (usec): min=3542, max=47575, avg=14349.20, stdev=5408.57
00:09:49.555 lat (usec): min=4144, max=47579, avg=14464.40, stdev=5477.00
00:09:49.555 clat percentiles (usec):
00:09:49.555 | 1.00th=[ 5800], 5.00th=[ 6718], 10.00th=[ 8291], 20.00th=[10159],
00:09:49.555 | 30.00th=[10552], 40.00th=[10945], 50.00th=[12649], 60.00th=[15008],
00:09:49.555 | 70.00th=[17957], 80.00th=[20317], 90.00th=[20841], 95.00th=[22152],
00:09:49.555 | 99.00th=[29230], 99.50th=[30802], 99.90th=[41157], 99.95th=[47449],
00:09:49.555 | 99.99th=[47449]
00:09:49.555 write: IOPS=4163, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1009msec); 0 zone resets
00:09:49.555 slat (usec): min=3, max=11523, avg=117.66, stdev=695.13
00:09:49.555 clat (usec): min=2956, max=53441, avg=16326.58, stdev=9059.00
00:09:49.555 lat (usec): min=2963, max=53454, avg=16444.24, stdev=9123.06
00:09:49.555 clat percentiles (usec):
00:09:49.555 | 1.00th=[ 4817], 5.00th=[ 6980], 10.00th=[ 7373], 20.00th=[ 9896],
00:09:49.555 | 30.00th=[11207], 40.00th=[11731], 50.00th=[13304], 60.00th=[15533],
00:09:49.555 | 70.00th=[19792], 80.00th=[21103], 90.00th=[26084], 95.00th=[38536],
00:09:49.555 | 99.00th=[49546], 99.50th=[50594], 99.90th=[53216], 99.95th=[53216],
00:09:49.555 | 99.99th=[53216]
00:09:49.555 bw ( KiB/s): min=12280, max=20488, per=25.41%, avg=16384.00, stdev=5803.93, samples=2
00:09:49.555 iops : min= 3070, max= 5122, avg=4096.00, stdev=1450.98, samples=2
00:09:49.555 lat (msec) : 4=0.16%, 10=19.30%, 20=56.26%, 50=23.86%, 100=0.42%
00:09:49.555 cpu : usr=2.38%, sys=7.04%, ctx=382, majf=0, minf=1
00:09:49.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:09:49.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:49.555 issued rwts: total=4096,4201,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:49.555 latency : target=0, window=0, percentile=100.00%, depth=128
00:09:49.555 job1: (groupid=0, jobs=1): err= 0: pid=888168: Fri Jul 26 08:43:07 2024
00:09:49.555 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec)
00:09:49.555 slat (usec): min=2, max=11997, avg=163.24, stdev=953.44
00:09:49.555 clat (usec): min=8546, max=57075, avg=19304.08, stdev=8394.41
00:09:49.555 lat (usec): min=8550, max=57096, avg=19467.32, stdev=8486.66
00:09:49.555 clat percentiles (usec):
00:09:49.555 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[11207], 20.00th=[13042],
00:09:49.555 | 30.00th=[14484], 40.00th=[15664], 50.00th=[18220], 60.00th=[20055],
00:09:49.555 | 70.00th=[20841], 80.00th=[22676], 90.00th=[27132], 95.00th=[40109],
00:09:49.555 | 99.00th=[51119], 99.50th=[51119], 99.90th=[56361],
99.95th=[56361], 00:09:49.555 | 99.99th=[56886] 00:09:49.555 write: IOPS=3335, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1007msec); 0 zone resets 00:09:49.555 slat (usec): min=4, max=12271, avg=139.21, stdev=757.27 00:09:49.555 clat (usec): min=5227, max=56572, avg=20315.05, stdev=8475.92 00:09:49.555 lat (usec): min=6157, max=56879, avg=20454.25, stdev=8514.63 00:09:49.555 clat percentiles (usec): 00:09:49.556 | 1.00th=[ 8979], 5.00th=[10552], 10.00th=[11863], 20.00th=[12911], 00:09:49.556 | 30.00th=[14746], 40.00th=[17695], 50.00th=[19530], 60.00th=[20841], 00:09:49.556 | 70.00th=[21627], 80.00th=[25035], 90.00th=[31589], 95.00th=[38011], 00:09:49.556 | 99.00th=[51643], 99.50th=[54264], 99.90th=[54789], 99.95th=[56361], 00:09:49.556 | 99.99th=[56361] 00:09:49.556 bw ( KiB/s): min=10920, max=14928, per=20.04%, avg=12924.00, stdev=2834.08, samples=2 00:09:49.556 iops : min= 2730, max= 3732, avg=3231.00, stdev=708.52, samples=2 00:09:49.556 lat (msec) : 10=3.22%, 20=55.12%, 50=40.27%, 100=1.38% 00:09:49.556 cpu : usr=4.87%, sys=5.57%, ctx=334, majf=0, minf=1 00:09:49.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:49.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.556 issued rwts: total=3072,3359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.556 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.556 job2: (groupid=0, jobs=1): err= 0: pid=888169: Fri Jul 26 08:43:07 2024 00:09:49.556 read: IOPS=4595, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:09:49.556 slat (usec): min=2, max=12158, avg=112.61, stdev=775.70 00:09:49.556 clat (usec): min=1447, max=29374, avg=14324.61, stdev=3961.59 00:09:49.556 lat (usec): min=1451, max=29384, avg=14437.22, stdev=4006.06 00:09:49.556 clat percentiles (usec): 00:09:49.556 | 1.00th=[ 5080], 5.00th=[ 9241], 10.00th=[11076], 20.00th=[12125], 00:09:49.556 | 30.00th=[12518], 
40.00th=[12911], 50.00th=[13042], 60.00th=[13829], 00:09:49.556 | 70.00th=[15008], 80.00th=[17171], 90.00th=[19792], 95.00th=[22414], 00:09:49.556 | 99.00th=[27132], 99.50th=[27657], 99.90th=[29230], 99.95th=[29492], 00:09:49.556 | 99.99th=[29492] 00:09:49.556 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:09:49.556 slat (usec): min=3, max=13063, avg=90.98, stdev=682.87 00:09:49.556 clat (usec): min=746, max=38554, avg=13034.27, stdev=4371.29 00:09:49.556 lat (usec): min=765, max=38573, avg=13125.25, stdev=4428.18 00:09:49.556 clat percentiles (usec): 00:09:49.556 | 1.00th=[ 1713], 5.00th=[ 6390], 10.00th=[ 8717], 20.00th=[11207], 00:09:49.556 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:09:49.556 | 70.00th=[13435], 80.00th=[13960], 90.00th=[18220], 95.00th=[19006], 00:09:49.556 | 99.00th=[30016], 99.50th=[30016], 99.90th=[30016], 99.95th=[35390], 00:09:49.556 | 99.99th=[38536] 00:09:49.556 bw ( KiB/s): min=16384, max=20480, per=28.59%, avg=18432.00, stdev=2896.31, samples=2 00:09:49.556 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:49.556 lat (usec) : 750=0.01%, 1000=0.04% 00:09:49.556 lat (msec) : 2=0.53%, 4=0.55%, 10=9.31%, 20=82.65%, 50=6.89% 00:09:49.556 cpu : usr=5.00%, sys=8.79%, ctx=375, majf=0, minf=1 00:09:49.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:49.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.556 issued rwts: total=4605,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.556 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.556 job3: (groupid=0, jobs=1): err= 0: pid=888171: Fri Jul 26 08:43:07 2024 00:09:49.556 read: IOPS=3964, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1009msec) 00:09:49.556 slat (usec): min=2, max=53243, avg=145.12, stdev=1213.00 00:09:49.556 clat (usec): min=659, max=95085, 
avg=18261.19, stdev=12169.94 00:09:49.556 lat (usec): min=5532, max=95091, avg=18406.31, stdev=12240.33 00:09:49.556 clat percentiles (usec): 00:09:49.556 | 1.00th=[ 8455], 5.00th=[10552], 10.00th=[11469], 20.00th=[12780], 00:09:49.556 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13960], 60.00th=[15664], 00:09:49.556 | 70.00th=[17957], 80.00th=[21365], 90.00th=[24773], 95.00th=[40633], 00:09:49.556 | 99.00th=[81265], 99.50th=[92799], 99.90th=[94897], 99.95th=[94897], 00:09:49.556 | 99.99th=[94897] 00:09:49.556 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:09:49.556 slat (usec): min=4, max=11083, avg=94.67, stdev=570.81 00:09:49.556 clat (usec): min=1471, max=27134, avg=13396.19, stdev=3032.57 00:09:49.556 lat (usec): min=1524, max=27147, avg=13490.86, stdev=3073.94 00:09:49.556 clat percentiles (usec): 00:09:49.556 | 1.00th=[ 4883], 5.00th=[ 7767], 10.00th=[ 9372], 20.00th=[11863], 00:09:49.556 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:09:49.556 | 70.00th=[14222], 80.00th=[14877], 90.00th=[16909], 95.00th=[18482], 00:09:49.556 | 99.00th=[21365], 99.50th=[22938], 99.90th=[25297], 99.95th=[26870], 00:09:49.556 | 99.99th=[27132] 00:09:49.556 bw ( KiB/s): min=12304, max=20464, per=25.41%, avg=16384.00, stdev=5769.99, samples=2 00:09:49.556 iops : min= 3076, max= 5116, avg=4096.00, stdev=1442.50, samples=2 00:09:49.556 lat (usec) : 750=0.01% 00:09:49.556 lat (msec) : 2=0.01%, 4=0.16%, 10=8.26%, 20=78.58%, 50=11.40% 00:09:49.556 lat (msec) : 100=1.57% 00:09:49.556 cpu : usr=4.56%, sys=7.84%, ctx=404, majf=0, minf=1 00:09:49.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:49.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.556 issued rwts: total=4000,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.556 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:09:49.556 00:09:49.556 Run status group 0 (all jobs): 00:09:49.556 READ: bw=61.1MiB/s (64.0MB/s), 11.9MiB/s-18.0MiB/s (12.5MB/s-18.8MB/s), io=61.6MiB (64.6MB), run=1002-1009msec 00:09:49.556 WRITE: bw=63.0MiB/s (66.0MB/s), 13.0MiB/s-18.0MiB/s (13.7MB/s-18.8MB/s), io=63.5MiB (66.6MB), run=1002-1009msec 00:09:49.556 00:09:49.556 Disk stats (read/write): 00:09:49.556 nvme0n1: ios=3122/3315, merge=0/0, ticks=26505/30700, in_queue=57205, util=86.87% 00:09:49.556 nvme0n2: ios=2597/2751, merge=0/0, ticks=26303/25185, in_queue=51488, util=87.40% 00:09:49.556 nvme0n3: ios=3613/4050, merge=0/0, ticks=50338/44568, in_queue=94906, util=99.69% 00:09:49.556 nvme0n4: ios=3584/3979, merge=0/0, ticks=45186/43206, in_queue=88392, util=89.67% 00:09:49.556 08:43:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:49.556 08:43:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=888306 00:09:49.556 08:43:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:49.556 08:43:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:49.556 [global] 00:09:49.556 thread=1 00:09:49.556 invalidate=1 00:09:49.556 rw=read 00:09:49.556 time_based=1 00:09:49.556 runtime=10 00:09:49.556 ioengine=libaio 00:09:49.556 direct=1 00:09:49.556 bs=4096 00:09:49.556 iodepth=1 00:09:49.556 norandommap=1 00:09:49.556 numjobs=1 00:09:49.556 00:09:49.556 [job0] 00:09:49.556 filename=/dev/nvme0n1 00:09:49.556 [job1] 00:09:49.556 filename=/dev/nvme0n2 00:09:49.556 [job2] 00:09:49.556 filename=/dev/nvme0n3 00:09:49.556 [job3] 00:09:49.556 filename=/dev/nvme0n4 00:09:49.556 Could not set queue depth (nvme0n1) 00:09:49.556 Could not set queue depth (nvme0n2) 00:09:49.556 Could not set queue depth (nvme0n3) 00:09:49.556 Could not set queue depth (nvme0n4) 00:09:49.556 job0: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.556 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.556 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.556 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.556 fio-3.35 00:09:49.556 Starting 4 threads 00:09:52.841 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:52.841 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:52.841 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=18161664, buflen=4096 00:09:52.841 fio: pid=888403, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:52.841 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.841 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:52.841 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=21520384, buflen=4096 00:09:52.841 fio: pid=888402, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:53.099 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.099 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:53.099 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read 
offset=356352, buflen=4096 00:09:53.099 fio: pid=888399, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:53.357 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=20475904, buflen=4096 00:09:53.357 fio: pid=888400, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:53.357 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.357 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:53.357 00:09:53.357 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=888399: Fri Jul 26 08:43:11 2024 00:09:53.357 read: IOPS=25, BW=101KiB/s (103kB/s)(348KiB/3455msec) 00:09:53.357 slat (usec): min=12, max=34773, avg=416.75, stdev=3704.50 00:09:53.357 clat (usec): min=390, max=43018, avg=39033.37, stdev=9579.14 00:09:53.357 lat (usec): min=405, max=75978, avg=39454.77, stdev=10361.67 00:09:53.357 clat percentiles (usec): 00:09:53.357 | 1.00th=[ 392], 5.00th=[ 586], 10.00th=[40633], 20.00th=[41157], 00:09:53.357 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:53.357 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:53.357 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:53.357 | 99.99th=[43254] 00:09:53.357 bw ( KiB/s): min= 96, max= 120, per=0.64%, avg=102.67, stdev= 9.35, samples=6 00:09:53.357 iops : min= 24, max= 30, avg=25.67, stdev= 2.34, samples=6 00:09:53.357 lat (usec) : 500=1.14%, 750=4.55% 00:09:53.357 lat (msec) : 50=93.18% 00:09:53.357 cpu : usr=0.12%, sys=0.00%, ctx=90, majf=0, minf=1 00:09:53.357 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:53.357 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.357 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.357 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.357 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=888400: Fri Jul 26 08:43:11 2024 00:09:53.357 read: IOPS=1350, BW=5403KiB/s (5533kB/s)(19.5MiB/3701msec) 00:09:53.357 slat (usec): min=4, max=16661, avg=28.66, stdev=393.70 00:09:53.357 clat (usec): min=231, max=42023, avg=703.15, stdev=3849.85 00:09:53.357 lat (usec): min=237, max=42038, avg=731.82, stdev=3869.57 00:09:53.357 clat percentiles (usec): 00:09:53.357 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 277], 00:09:53.357 | 30.00th=[ 297], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 343], 00:09:53.357 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 400], 95.00th=[ 429], 00:09:53.357 | 99.00th=[ 996], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:53.357 | 99.99th=[42206] 00:09:53.357 bw ( KiB/s): min= 96, max=10110, per=32.31%, avg=5159.71, stdev=3677.31, samples=7 00:09:53.357 iops : min= 24, max= 2527, avg=1289.86, stdev=919.22, samples=7 00:09:53.357 lat (usec) : 250=3.54%, 500=94.80%, 750=0.64%, 1000=0.02% 00:09:53.357 lat (msec) : 4=0.04%, 10=0.02%, 20=0.02%, 50=0.90% 00:09:53.357 cpu : usr=0.95%, sys=2.38%, ctx=5009, majf=0, minf=1 00:09:53.357 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.357 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.357 issued rwts: total=5000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.357 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.357 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=888402: Fri Jul 26 08:43:11 2024 00:09:53.357 read: IOPS=1650, 
BW=6603KiB/s (6761kB/s)(20.5MiB/3183msec) 00:09:53.357 slat (nsec): min=4935, max=78686, avg=18944.82, stdev=10430.96 00:09:53.357 clat (usec): min=237, max=42272, avg=578.07, stdev=3188.19 00:09:53.357 lat (usec): min=250, max=42292, avg=597.02, stdev=3188.44 00:09:53.357 clat percentiles (usec): 00:09:53.357 | 1.00th=[ 251], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 277], 00:09:53.357 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 334], 00:09:53.357 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 412], 95.00th=[ 465], 00:09:53.357 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:53.357 | 99.99th=[42206] 00:09:53.357 bw ( KiB/s): min= 104, max=10928, per=43.84%, avg=7000.00, stdev=4009.09, samples=6 00:09:53.357 iops : min= 26, max= 2732, avg=1750.00, stdev=1002.27, samples=6 00:09:53.357 lat (usec) : 250=0.89%, 500=97.07%, 750=1.39% 00:09:53.357 lat (msec) : 10=0.02%, 50=0.61% 00:09:53.357 cpu : usr=1.23%, sys=3.68%, ctx=5256, majf=0, minf=1 00:09:53.357 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.358 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.358 issued rwts: total=5255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.358 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=888403: Fri Jul 26 08:43:11 2024 00:09:53.358 read: IOPS=1507, BW=6029KiB/s (6173kB/s)(17.3MiB/2942msec) 00:09:53.358 slat (nsec): min=5659, max=70753, avg=15526.74, stdev=6922.37 00:09:53.358 clat (usec): min=250, max=42334, avg=637.74, stdev=3427.59 00:09:53.358 lat (usec): min=257, max=42347, avg=653.27, stdev=3427.75 00:09:53.358 clat percentiles (usec): 00:09:53.358 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 310], 00:09:53.358 | 30.00th=[ 322], 40.00th=[ 334], 
50.00th=[ 343], 60.00th=[ 355], 00:09:53.358 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 388], 95.00th=[ 494], 00:09:53.358 | 99.00th=[ 644], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:53.358 | 99.99th=[42206] 00:09:53.358 bw ( KiB/s): min= 3072, max=10264, per=36.88%, avg=5889.60, stdev=2812.82, samples=5 00:09:53.358 iops : min= 768, max= 2566, avg=1472.40, stdev=703.21, samples=5 00:09:53.358 lat (usec) : 500=95.20%, 750=3.97%, 1000=0.07% 00:09:53.358 lat (msec) : 2=0.02%, 4=0.02%, 50=0.70% 00:09:53.358 cpu : usr=1.39%, sys=3.67%, ctx=4436, majf=0, minf=1 00:09:53.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.358 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.358 issued rwts: total=4435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.358 00:09:53.358 Run status group 0 (all jobs): 00:09:53.358 READ: bw=15.6MiB/s (16.3MB/s), 101KiB/s-6603KiB/s (103kB/s-6761kB/s), io=57.7MiB (60.5MB), run=2942-3701msec 00:09:53.358 00:09:53.358 Disk stats (read/write): 00:09:53.358 nvme0n1: ios=84/0, merge=0/0, ticks=3275/0, in_queue=3275, util=93.82% 00:09:53.358 nvme0n2: ios=4705/0, merge=0/0, ticks=4324/0, in_queue=4324, util=98.63% 00:09:53.358 nvme0n3: ios=5293/0, merge=0/0, ticks=3686/0, in_queue=3686, util=99.81% 00:09:53.358 nvme0n4: ios=4301/0, merge=0/0, ticks=2643/0, in_queue=2643, util=96.70% 00:09:53.615 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.615 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:53.873 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.873 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:54.130 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.130 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:54.387 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.387 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:54.648 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:54.648 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 888306 00:09:54.648 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:54.648 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:54.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.929 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:54.929 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:54.929 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:54.929 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.929 08:43:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:54.929 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.929 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:54.929 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:54.929 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:54.929 nvmf hotplug test: fio failed as expected 00:09:54.929 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.187 08:43:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.187 rmmod nvme_tcp 00:09:55.187 rmmod nvme_fabrics 00:09:55.187 rmmod nvme_keyring 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 886321 ']' 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 886321 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 886321 ']' 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 886321 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 886321 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 886321' 00:09:55.187 killing process with pid 886321 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 886321 00:09:55.187 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 886321 
00:09:55.446 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.446 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.446 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.446 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.446 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.446 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.446 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.446 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.352 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.352 00:09:57.352 real 0m23.485s 00:09:57.352 user 1m21.214s 00:09:57.352 sys 0m7.068s 00:09:57.352 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.352 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.352 ************************************ 00:09:57.352 END TEST nvmf_fio_target 00:09:57.352 ************************************ 00:09:57.352 08:43:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:57.352 08:43:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.352 08:43:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.352 08:43:15 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.611 ************************************ 00:09:57.611 START TEST nvmf_bdevio 00:09:57.611 ************************************ 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:57.611 * Looking for test storage... 00:09:57.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.611 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.612 08:43:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:09:57.612 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.515 08:43:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.515 08:43:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:59.515 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:59.515 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:59.515 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:59.515 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:59.515 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.516 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.516 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.516 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.516 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:59.516 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.774 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.774 08:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:59.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:09:59.774 00:09:59.774 --- 10.0.0.2 ping statistics --- 00:09:59.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.774 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:09:59.774 00:09:59.774 --- 10.0.0.1 ping statistics --- 00:09:59.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.774 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=891028 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 891028 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 891028 ']' 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.774 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.774 [2024-07-26 08:43:18.090889] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
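The namespace plumbing the log performs above (`nvmf_tcp_init` in nvmf/common.sh) can be condensed into a dry-run sketch. The `run` helper that records and echoes instead of executing is purely illustrative — the real script runs these commands directly as root; the interface names, addresses, and port come straight from the log:

```shell
# Dry-run sketch of the TCP test-net setup seen above: move the target-side
# port into its own namespace, address both ends, bring links up, and open
# NVMe/TCP port 4420. run() only records and prints; common.sh executes.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

PLAN=""
run() {
    PLAN="$PLAN$*
"
    echo "+ $*"
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
```

The two `ping` checks in the log (host to 10.0.0.2, namespace back to 10.0.0.1) verify this wiring before the target starts.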
00:09:59.774 [2024-07-26 08:43:18.090973] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.774 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.774 [2024-07-26 08:43:18.128420] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:59.774 [2024-07-26 08:43:18.159181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.033 [2024-07-26 08:43:18.251558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.033 [2024-07-26 08:43:18.251618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.033 [2024-07-26 08:43:18.251635] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.033 [2024-07-26 08:43:18.251649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.033 [2024-07-26 08:43:18.251661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
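The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above is `waitforlisten`. The real helper in autotest_common.sh also checks the pid and probes via rpc.py; the minimal stand-in below only polls for the socket with a retry budget:

```shell
# Minimal stand-in for the waitforlisten step above: poll until the app's
# RPC UNIX socket appears, up to max_retries attempts. The real helper in
# autotest_common.sh also verifies the pid and uses rpc.py; this only polls.
wait_for_sock() {
    sock=$1
    max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        # -S: true once the UNIX domain socket file has been created
        [ -S "$sock" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

In the log this corresponds to `waitforlisten 891028` gating the rest of the test on `/var/tmp/spdk.sock`.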
00:10:00.033 [2024-07-26 08:43:18.251791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:00.033 [2024-07-26 08:43:18.251866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:00.033 [2024-07-26 08:43:18.251951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:00.033 [2024-07-26 08:43:18.251957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.033 [2024-07-26 08:43:18.406374] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.033 08:43:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.033 Malloc0 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.033 [2024-07-26 08:43:18.458791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:00.033 { 00:10:00.033 "params": { 00:10:00.033 "name": "Nvme$subsystem", 00:10:00.033 "trtype": "$TEST_TRANSPORT", 00:10:00.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.033 "adrfam": "ipv4", 00:10:00.033 "trsvcid": "$NVMF_PORT", 00:10:00.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.033 "hdgst": ${hdgst:-false}, 00:10:00.033 "ddgst": ${ddgst:-false} 00:10:00.033 }, 00:10:00.033 "method": "bdev_nvme_attach_controller" 00:10:00.033 } 00:10:00.033 EOF 00:10:00.033 )") 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
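The `rpc_cmd` sequence above (bdevio.sh lines 18–22) provisions the target. Condensed as plain scripts/rpc.py invocations it looks roughly like the sketch below; `rpc()` echoing instead of executing is the only liberty taken, the flags are the ones the log shows:

```shell
# Dry-run condensation of the rpc_cmd provisioning sequence above. rpc()
# only echoes the scripts/rpc.py call; the test sends it to /var/tmp/spdk.sock.
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "scripts/rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8192B in-capsule data
rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The listener step is what produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above.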
00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:00.033 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:00.033 "params": { 00:10:00.033 "name": "Nvme1", 00:10:00.033 "trtype": "tcp", 00:10:00.033 "traddr": "10.0.0.2", 00:10:00.033 "adrfam": "ipv4", 00:10:00.033 "trsvcid": "4420", 00:10:00.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.033 "hdgst": false, 00:10:00.033 "ddgst": false 00:10:00.033 }, 00:10:00.033 "method": "bdev_nvme_attach_controller" 00:10:00.033 }' 00:10:00.293 [2024-07-26 08:43:18.507131] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:00.293 [2024-07-26 08:43:18.507204] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891170 ] 00:10:00.293 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.293 [2024-07-26 08:43:18.539538] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
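The heredoc-plus-`jq` pipeline above (`gen_nvmf_target_json`) is what feeds bdevio its controller config via `--json /dev/fd/62`. A simplified stand-in that emits the single attach-controller object the log prints (the real helper templates one entry per subsystem and merges them with jq):

```shell
# Simplified stand-in for gen_nvmf_target_json as seen above: emit the
# bdev_nvme_attach_controller config fragment for subsystem 1, with the
# exact values the log shows after jq substitution.
gen_json() {
    cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

Digest offload (`hdgst`/`ddgst`) defaults to false here, so the bdevio run exercises plain NVMe/TCP.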
00:10:00.293 [2024-07-26 08:43:18.569028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.293 [2024-07-26 08:43:18.661717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.293 [2024-07-26 08:43:18.661764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.293 [2024-07-26 08:43:18.661767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.553 I/O targets: 00:10:00.553 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:00.553 00:10:00.553 00:10:00.553 CUnit - A unit testing framework for C - Version 2.1-3 00:10:00.553 http://cunit.sourceforge.net/ 00:10:00.553 00:10:00.553 00:10:00.553 Suite: bdevio tests on: Nvme1n1 00:10:00.811 Test: blockdev write read block ...passed 00:10:00.811 Test: blockdev write zeroes read block ...passed 00:10:00.811 Test: blockdev write zeroes read no split ...passed 00:10:00.811 Test: blockdev write zeroes read split ...passed 00:10:00.811 Test: blockdev write zeroes read split partial ...passed 00:10:00.811 Test: blockdev reset ...[2024-07-26 08:43:19.206899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:00.811 [2024-07-26 08:43:19.207013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a940 (9): Bad file descriptor 00:10:01.069 [2024-07-26 08:43:19.301380] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:01.069 passed 00:10:01.069 Test: blockdev write read 8 blocks ...passed 00:10:01.069 Test: blockdev write read size > 128k ...passed 00:10:01.069 Test: blockdev write read invalid size ...passed 00:10:01.069 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.069 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.069 Test: blockdev write read max offset ...passed 00:10:01.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.069 Test: blockdev writev readv 8 blocks ...passed 00:10:01.069 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.069 Test: blockdev writev readv block ...passed 00:10:01.069 Test: blockdev writev readv size > 128k ...passed 00:10:01.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.069 Test: blockdev comparev and writev ...[2024-07-26 08:43:19.518314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.069 [2024-07-26 08:43:19.518348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:01.069 [2024-07-26 08:43:19.518372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.069 [2024-07-26 08:43:19.518387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:01.069 [2024-07-26 08:43:19.518725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.069 [2024-07-26 08:43:19.518748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:01.069 [2024-07-26 08:43:19.518769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.069 [2024-07-26 08:43:19.518785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:01.069 [2024-07-26 08:43:19.519163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.069 [2024-07-26 08:43:19.519187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:01.069 [2024-07-26 08:43:19.519209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.069 [2024-07-26 08:43:19.519224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:01.069 [2024-07-26 08:43:19.519550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.069 [2024-07-26 08:43:19.519573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:01.069 [2024-07-26 08:43:19.519593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.069 [2024-07-26 08:43:19.519608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:01.328 passed 00:10:01.329 Test: blockdev nvme passthru rw ...passed 00:10:01.329 Test: blockdev nvme passthru vendor specific ...[2024-07-26 08:43:19.602376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.329 [2024-07-26 08:43:19.602411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:01.329 [2024-07-26 08:43:19.602593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.329 [2024-07-26 08:43:19.602616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:01.329 [2024-07-26 08:43:19.602793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.329 [2024-07-26 08:43:19.602816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:01.329 [2024-07-26 08:43:19.602987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.329 [2024-07-26 08:43:19.603009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:01.329 passed 00:10:01.329 Test: blockdev nvme admin passthru ...passed 00:10:01.329 Test: blockdev copy ...passed 00:10:01.329 00:10:01.329 Run Summary: Type Total Ran Passed Failed Inactive 00:10:01.329 suites 1 1 n/a 0 0 00:10:01.329 tests 23 23 23 0 0 00:10:01.329 asserts 152 152 152 0 n/a 00:10:01.329 00:10:01.329 Elapsed time = 1.316 seconds 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 
00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:01.589 rmmod nvme_tcp 00:10:01.589 rmmod nvme_fabrics 00:10:01.589 rmmod nvme_keyring 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 891028 ']' 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 891028 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 891028 ']' 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 891028 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 891028 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # 
process_name=reactor_3 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 891028' 00:10:01.589 killing process with pid 891028 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 891028 00:10:01.589 08:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 891028 00:10:01.849 08:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:01.849 08:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:01.849 08:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:01.849 08:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:01.849 08:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:01.849 08:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.849 08:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.849 08:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:04.390 00:10:04.390 real 0m6.432s 00:10:04.390 user 0m11.097s 00:10:04.390 sys 0m2.094s 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.390 ************************************ 00:10:04.390 END TEST nvmf_bdevio 00:10:04.390 
************************************ 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:04.390 00:10:04.390 real 3m50.583s 00:10:04.390 user 9m52.951s 00:10:04.390 sys 1m9.037s 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.390 ************************************ 00:10:04.390 END TEST nvmf_target_core 00:10:04.390 ************************************ 00:10:04.390 08:43:22 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.390 08:43:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:04.390 08:43:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.390 08:43:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.390 ************************************ 00:10:04.390 START TEST nvmf_target_extra 00:10:04.390 ************************************ 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.390 * Looking for test storage... 
00:10:04.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.390 08:43:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.391 08:43:22 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.391 ************************************ 00:10:04.391 START TEST nvmf_example 00:10:04.391 ************************************ 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:04.391 * Looking for test storage... 
00:10:04.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:04.391 08:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:04.391 08:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:04.391 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.294 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.294 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.294 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.294 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.294 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.294 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.294 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.294 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.294 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:06.295 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:06.295 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.295 08:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:06.295 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:06.295 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:10:06.295 00:10:06.295 --- 10.0.0.2 ping statistics --- 00:10:06.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.295 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:10:06.295 00:10:06.295 --- 10.0.0.1 ping statistics --- 00:10:06.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.295 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.295 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=893307 00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 893307 00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 893307 ']' 00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.296 08:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.296 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:07.672 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:07.672 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.697 Initializing NVMe Controllers 00:10:17.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:17.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:17.697 Initialization complete. Launching workers. 00:10:17.697 ======================================================== 00:10:17.697 Latency(us) 00:10:17.697 Device Information : IOPS MiB/s Average min max 00:10:17.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15157.09 59.21 4223.32 883.82 18120.79 00:10:17.697 ======================================================== 00:10:17.697 Total : 15157.09 59.21 4223.32 883.82 18120.79 00:10:17.697 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.697 rmmod nvme_tcp 00:10:17.697 rmmod nvme_fabrics 00:10:17.697 rmmod nvme_keyring 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 893307 ']' 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 893307 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 893307 ']' 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 893307 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:17.697 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.698 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 893307 00:10:17.698 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:17.698 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:17.698 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 893307' 00:10:17.698 killing process with pid 893307 00:10:17.698 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 893307 00:10:17.698 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 893307 00:10:17.956 nvmf threads initialize successfully 00:10:17.956 bdev subsystem init successfully 00:10:17.956 created a nvmf target service 00:10:17.956 create targets's poll groups done 00:10:17.956 all subsystems of target started 00:10:17.956 nvmf target is running 00:10:17.956 all subsystems of target stopped 00:10:17.956 destroy targets's poll groups done 00:10:17.956 destroyed the nvmf target service 
00:10:17.956 bdev subsystem finish successfully 00:10:17.956 nvmf threads destroy successfully 00:10:17.956 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.956 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.956 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.956 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.956 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.956 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.956 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.956 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.864 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.864 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:19.864 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:19.864 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.864 00:10:19.864 real 0m15.852s 00:10:19.864 user 0m45.010s 00:10:19.864 sys 0m3.200s 00:10:19.864 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.864 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.864 ************************************ 00:10:19.864 END TEST nvmf_example 00:10:19.864 ************************************ 00:10:19.864 08:43:38 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:19.864 08:43:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.864 08:43:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.864 08:43:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:20.126 ************************************ 00:10:20.126 START TEST nvmf_filesystem 00:10:20.126 ************************************ 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:20.126 * Looking for test storage... 00:10:20.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:10:20.126 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # 
CONFIG_COVERAGE=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:20.127 08:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:20.127 08:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # 
ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:20.127 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:20.127 #define SPDK_CONFIG_H 00:10:20.127 #define SPDK_CONFIG_APPS 1 00:10:20.127 #define SPDK_CONFIG_ARCH native 00:10:20.127 #undef SPDK_CONFIG_ASAN 00:10:20.127 #undef SPDK_CONFIG_AVAHI 00:10:20.127 #undef SPDK_CONFIG_CET 00:10:20.127 #define SPDK_CONFIG_COVERAGE 1 00:10:20.127 #define SPDK_CONFIG_CROSS_PREFIX 00:10:20.127 #undef SPDK_CONFIG_CRYPTO 00:10:20.127 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:20.127 #undef SPDK_CONFIG_CUSTOMOCF 00:10:20.127 #undef SPDK_CONFIG_DAOS 00:10:20.127 #define SPDK_CONFIG_DAOS_DIR 00:10:20.127 #define SPDK_CONFIG_DEBUG 1 00:10:20.127 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:20.127 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:20.127 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:20.127 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:20.127 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:20.127 #undef SPDK_CONFIG_DPDK_UADK 00:10:20.127 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:20.127 #define 
SPDK_CONFIG_EXAMPLES 1 00:10:20.127 #undef SPDK_CONFIG_FC 00:10:20.127 #define SPDK_CONFIG_FC_PATH 00:10:20.127 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:20.127 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:20.127 #undef SPDK_CONFIG_FUSE 00:10:20.127 #undef SPDK_CONFIG_FUZZER 00:10:20.127 #define SPDK_CONFIG_FUZZER_LIB 00:10:20.127 #undef SPDK_CONFIG_GOLANG 00:10:20.127 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:20.127 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:20.127 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:20.127 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:20.127 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:20.127 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:20.127 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:20.127 #define SPDK_CONFIG_IDXD 1 00:10:20.127 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:20.127 #undef SPDK_CONFIG_IPSEC_MB 00:10:20.127 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:20.127 #define SPDK_CONFIG_ISAL 1 00:10:20.127 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:20.127 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:20.127 #define SPDK_CONFIG_LIBDIR 00:10:20.127 #undef SPDK_CONFIG_LTO 00:10:20.127 #define SPDK_CONFIG_MAX_LCORES 128 00:10:20.127 #define SPDK_CONFIG_NVME_CUSE 1 00:10:20.127 #undef SPDK_CONFIG_OCF 00:10:20.127 #define SPDK_CONFIG_OCF_PATH 00:10:20.127 #define SPDK_CONFIG_OPENSSL_PATH 00:10:20.127 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:20.127 #define SPDK_CONFIG_PGO_DIR 00:10:20.127 #undef SPDK_CONFIG_PGO_USE 00:10:20.127 #define SPDK_CONFIG_PREFIX /usr/local 00:10:20.127 #undef SPDK_CONFIG_RAID5F 00:10:20.127 #undef SPDK_CONFIG_RBD 00:10:20.127 #define SPDK_CONFIG_RDMA 1 00:10:20.127 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:20.127 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:20.127 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:20.127 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:20.127 #define SPDK_CONFIG_SHARED 1 00:10:20.127 #undef SPDK_CONFIG_SMA 00:10:20.127 #define SPDK_CONFIG_TESTS 1 00:10:20.127 #undef SPDK_CONFIG_TSAN 00:10:20.127 #define 
SPDK_CONFIG_UBLK 1 00:10:20.128 #define SPDK_CONFIG_UBSAN 1 00:10:20.128 #undef SPDK_CONFIG_UNIT_TESTS 00:10:20.128 #undef SPDK_CONFIG_URING 00:10:20.128 #define SPDK_CONFIG_URING_PATH 00:10:20.128 #undef SPDK_CONFIG_URING_ZNS 00:10:20.128 #undef SPDK_CONFIG_USDT 00:10:20.128 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:20.128 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:20.128 #define SPDK_CONFIG_VFIO_USER 1 00:10:20.128 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:20.128 #define SPDK_CONFIG_VHOST 1 00:10:20.128 #define SPDK_CONFIG_VIRTIO 1 00:10:20.128 #undef SPDK_CONFIG_VTUNE 00:10:20.128 #define SPDK_CONFIG_VTUNE_DIR 00:10:20.128 #define SPDK_CONFIG_WERROR 1 00:10:20.128 #define SPDK_CONFIG_WPDK_DIR 00:10:20.128 #undef SPDK_CONFIG_XNVME 00:10:20.128 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.128 08:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:20.128 08:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:20.128 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:20.129 
08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:20.129 08:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:20.129 
08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : true 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:20.129 08:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.129 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 895001 ]] 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 895001 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.RbMIiv 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.RbMIiv/tests/target /tmp/spdk.RbMIiv 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:10:20.130 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:10:20.131 08:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=54036713472 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994713088 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=7957999616 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30935175168 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:20.131 08:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376535040 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996549632 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=806912 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:10:20.131 * Looking for test storage... 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=54036713472 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=10172592128 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.131 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.132 08:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:20.132 08:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:20.132 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:22.664 08:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:22.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:22.664 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:22.664 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:22.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:22.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:22.665 08:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:22.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:22.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:10:22.665 00:10:22.665 --- 10.0.0.2 ping statistics --- 00:10:22.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.665 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:10:22.665 00:10:22.665 --- 10.0.0.1 ping statistics --- 00:10:22.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.665 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:22.665 08:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:22.665 ************************************ 00:10:22.665 START TEST nvmf_filesystem_no_in_capsule 00:10:22.665 ************************************ 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=896630 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 896630 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 896630 ']' 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.665 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.665 [2024-07-26 08:43:40.812796] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:22.665 [2024-07-26 08:43:40.812895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.665 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.665 [2024-07-26 08:43:40.853754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:22.665 [2024-07-26 08:43:40.880624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.665 [2024-07-26 08:43:40.970134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:22.665 [2024-07-26 08:43:40.970195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.665 [2024-07-26 08:43:40.970224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.665 [2024-07-26 08:43:40.970236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.665 [2024-07-26 08:43:40.970246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.665 [2024-07-26 08:43:40.970300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.666 [2024-07-26 08:43:40.970359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.666 [2024-07-26 08:43:40.970424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.666 [2024-07-26 08:43:40.970426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.666 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.666 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:22.666 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.666 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.666 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.666 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.666 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:22.666 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:22.666 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.666 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.666 [2024-07-26 08:43:41.116321] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.924 Malloc1 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.924 [2024-07-26 08:43:41.305988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:22.924 08:43:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:22.924 { 00:10:22.924 "name": "Malloc1", 00:10:22.924 "aliases": [ 00:10:22.924 "dd1813fe-8d6c-410e-a113-01bd913a858d" 00:10:22.924 ], 00:10:22.924 "product_name": "Malloc disk", 00:10:22.924 "block_size": 512, 00:10:22.924 "num_blocks": 1048576, 00:10:22.924 "uuid": "dd1813fe-8d6c-410e-a113-01bd913a858d", 00:10:22.924 "assigned_rate_limits": { 00:10:22.924 "rw_ios_per_sec": 0, 00:10:22.924 "rw_mbytes_per_sec": 0, 00:10:22.924 "r_mbytes_per_sec": 0, 00:10:22.924 "w_mbytes_per_sec": 0 00:10:22.924 }, 00:10:22.924 "claimed": true, 00:10:22.924 "claim_type": "exclusive_write", 00:10:22.924 "zoned": false, 00:10:22.924 "supported_io_types": { 00:10:22.924 "read": true, 00:10:22.924 "write": true, 00:10:22.924 "unmap": true, 00:10:22.924 "flush": true, 00:10:22.924 "reset": true, 00:10:22.924 "nvme_admin": false, 00:10:22.924 "nvme_io": false, 00:10:22.924 "nvme_io_md": false, 00:10:22.924 "write_zeroes": true, 00:10:22.924 "zcopy": true, 00:10:22.924 "get_zone_info": 
false, 00:10:22.924 "zone_management": false, 00:10:22.924 "zone_append": false, 00:10:22.924 "compare": false, 00:10:22.924 "compare_and_write": false, 00:10:22.924 "abort": true, 00:10:22.924 "seek_hole": false, 00:10:22.924 "seek_data": false, 00:10:22.924 "copy": true, 00:10:22.924 "nvme_iov_md": false 00:10:22.924 }, 00:10:22.924 "memory_domains": [ 00:10:22.924 { 00:10:22.924 "dma_device_id": "system", 00:10:22.924 "dma_device_type": 1 00:10:22.924 }, 00:10:22.924 { 00:10:22.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.924 "dma_device_type": 2 00:10:22.924 } 00:10:22.924 ], 00:10:22.924 "driver_specific": {} 00:10:22.924 } 00:10:22.924 ]' 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:22.924 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:23.182 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:23.182 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:23.182 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:23.182 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:23.182 08:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.749 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.749 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:23.749 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.749 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:23.749 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:25.660 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:25.660 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:25.660 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.660 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:25.660 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.660 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:25.661 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:25.661 08:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:25.661 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:25.661 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:25.661 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:25.661 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:25.661 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:25.661 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:25.661 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:25.661 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:25.661 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:25.917 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:26.483 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:27.420 08:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.420 ************************************ 00:10:27.420 START TEST filesystem_ext4 00:10:27.420 ************************************ 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:27.420 08:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:27.420 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:27.420 mke2fs 1.46.5 (30-Dec-2021) 00:10:27.680 Discarding device blocks: 0/522240 done 00:10:27.680 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:27.680 Filesystem UUID: ee8c0f62-e2c1-4ed0-8f87-e0db26d61c2b 00:10:27.680 Superblock backups stored on blocks: 00:10:27.680 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:27.680 00:10:27.680 Allocating group tables: 0/64 done 00:10:27.680 Writing inode tables: 0/64 done 00:10:27.680 Creating journal (8192 blocks): done 00:10:27.680 Writing superblocks and filesystem accounting information: 0/64 done 00:10:27.680 00:10:27.680 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:27.680 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:27.939 08:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 896630 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.939 00:10:27.939 real 0m0.573s 00:10:27.939 user 0m0.016s 00:10:27.939 sys 0m0.060s 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.939 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:27.939 ************************************ 00:10:27.939 END TEST filesystem_ext4 00:10:27.939 ************************************ 00:10:28.199 08:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.199 ************************************ 00:10:28.199 START TEST filesystem_btrfs 00:10:28.199 ************************************ 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:28.199 08:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:28.199 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:28.458 btrfs-progs v6.6.2 00:10:28.458 See https://btrfs.readthedocs.io for more information. 00:10:28.458 00:10:28.458 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:28.458 NOTE: several default settings have changed in version 5.15, please make sure 00:10:28.458 this does not affect your deployments: 00:10:28.458 - DUP for metadata (-m dup) 00:10:28.458 - enabled no-holes (-O no-holes) 00:10:28.458 - enabled free-space-tree (-R free-space-tree) 00:10:28.458 00:10:28.458 Label: (null) 00:10:28.458 UUID: 174c5b66-efef-496f-8f49-7cfa8ca7e2f2 00:10:28.458 Node size: 16384 00:10:28.458 Sector size: 4096 00:10:28.458 Filesystem size: 510.00MiB 00:10:28.458 Block group profiles: 00:10:28.458 Data: single 8.00MiB 00:10:28.458 Metadata: DUP 32.00MiB 00:10:28.458 System: DUP 8.00MiB 00:10:28.458 SSD detected: yes 00:10:28.458 Zoned device: no 00:10:28.458 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:28.458 Runtime features: free-space-tree 00:10:28.458 Checksum: crc32c 00:10:28.458 Number of devices: 1 00:10:28.458 Devices: 00:10:28.458 ID SIZE PATH 00:10:28.458 1 510.00MiB /dev/nvme0n1p1 00:10:28.458 00:10:28.458 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:28.458 08:43:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 896630 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:28.718 00:10:28.718 real 0m0.553s 00:10:28.718 user 0m0.019s 00:10:28.718 sys 0m0.110s 
00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.718 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:28.718 ************************************ 00:10:28.718 END TEST filesystem_btrfs 00:10:28.718 ************************************ 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.719 ************************************ 00:10:28.719 START TEST filesystem_xfs 00:10:28.719 ************************************ 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:28.719 08:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:28.719 08:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:28.719 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:28.719 = sectsz=512 attr=2, projid32bit=1 00:10:28.719 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:28.719 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:28.719 data = bsize=4096 blocks=130560, imaxpct=25 00:10:28.719 = sunit=0 swidth=0 blks 00:10:28.719 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:28.719 log =internal log bsize=4096 blocks=16384, version=2 00:10:28.719 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:28.719 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:29.657 Discarding blocks...Done. 
00:10:29.657 08:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:29.657 08:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:32.191 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 896630 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:32.192 08:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:32.192 00:10:32.192 real 0m3.252s 00:10:32.192 user 0m0.023s 00:10:32.192 sys 0m0.051s 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:32.192 ************************************ 00:10:32.192 END TEST filesystem_xfs 00:10:32.192 ************************************ 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:32.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:32.192 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 896630 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 896630 ']' 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 896630 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 896630 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 896630' 00:10:32.449 killing process with pid 896630 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 896630 00:10:32.449 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 896630 00:10:32.706 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:32.706 00:10:32.706 real 0m10.356s 00:10:32.706 user 0m39.657s 00:10:32.706 sys 0m1.616s 00:10:32.706 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.706 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.706 ************************************ 00:10:32.706 END TEST nvmf_filesystem_no_in_capsule 00:10:32.706 ************************************ 00:10:32.706 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:32.706 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:32.706 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.706 08:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.964 ************************************ 00:10:32.964 START TEST nvmf_filesystem_in_capsule 00:10:32.964 ************************************ 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=898044 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 898044 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 898044 ']' 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.964 08:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.964 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.964 [2024-07-26 08:43:51.220746] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:32.964 [2024-07-26 08:43:51.220842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.964 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.964 [2024-07-26 08:43:51.261068] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:32.964 [2024-07-26 08:43:51.287738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.964 [2024-07-26 08:43:51.375615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.964 [2024-07-26 08:43:51.375669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:32.964 [2024-07-26 08:43:51.375697] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.964 [2024-07-26 08:43:51.375709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.964 [2024-07-26 08:43:51.375719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.964 [2024-07-26 08:43:51.375801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.964 [2024-07-26 08:43:51.375867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.964 [2024-07-26 08:43:51.375933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.964 [2024-07-26 08:43:51.375936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- 
# rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.224 [2024-07-26 08:43:51.520555] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.224 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.487 Malloc1 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.487 [2024-07-26 08:43:51.710456] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:33.487 08:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:33.487 { 00:10:33.487 "name": "Malloc1", 00:10:33.487 "aliases": [ 00:10:33.487 "c1f9067f-977f-4d72-8651-67724251f38b" 00:10:33.487 ], 00:10:33.487 "product_name": "Malloc disk", 00:10:33.487 "block_size": 512, 00:10:33.487 "num_blocks": 1048576, 00:10:33.487 "uuid": "c1f9067f-977f-4d72-8651-67724251f38b", 00:10:33.487 "assigned_rate_limits": { 00:10:33.487 "rw_ios_per_sec": 0, 00:10:33.487 "rw_mbytes_per_sec": 0, 00:10:33.487 "r_mbytes_per_sec": 0, 00:10:33.487 "w_mbytes_per_sec": 0 00:10:33.487 }, 00:10:33.487 "claimed": true, 00:10:33.487 "claim_type": "exclusive_write", 00:10:33.487 "zoned": false, 00:10:33.487 "supported_io_types": { 00:10:33.487 "read": true, 00:10:33.487 "write": true, 00:10:33.487 "unmap": true, 00:10:33.487 "flush": true, 00:10:33.487 "reset": true, 00:10:33.487 "nvme_admin": false, 00:10:33.487 "nvme_io": false, 00:10:33.487 "nvme_io_md": false, 00:10:33.487 "write_zeroes": true, 00:10:33.487 "zcopy": true, 00:10:33.487 "get_zone_info": false, 00:10:33.487 "zone_management": false, 00:10:33.487 "zone_append": false, 00:10:33.487 "compare": false, 00:10:33.487 "compare_and_write": false, 00:10:33.487 "abort": true, 00:10:33.487 "seek_hole": false, 00:10:33.487 "seek_data": false, 00:10:33.487 "copy": true, 00:10:33.487 "nvme_iov_md": 
false 00:10:33.487 }, 00:10:33.487 "memory_domains": [ 00:10:33.487 { 00:10:33.487 "dma_device_id": "system", 00:10:33.487 "dma_device_type": 1 00:10:33.487 }, 00:10:33.487 { 00:10:33.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.487 "dma_device_type": 2 00:10:33.487 } 00:10:33.487 ], 00:10:33.487 "driver_specific": {} 00:10:33.487 } 00:10:33.487 ]' 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:33.487 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.081 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.081 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 
00:10:34.081 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.081 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:34.081 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:35.985 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:35.985 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:35.985 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.985 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:35.985 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.985 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:35.985 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:35.985 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:35.985 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:35.985 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:35.985 
08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:35.986 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:35.986 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:35.986 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:35.986 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:35.986 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:35.986 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:36.552 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:37.488 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.427 08:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.427 ************************************ 00:10:38.427 START TEST filesystem_in_capsule_ext4 00:10:38.427 ************************************ 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@932 -- # force=-F 00:10:38.427 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:38.427 mke2fs 1.46.5 (30-Dec-2021) 00:10:38.427 Discarding device blocks: 0/522240 done 00:10:38.427 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:38.427 Filesystem UUID: d1264ee0-e5ad-4114-a3e9-3f7967481fe0 00:10:38.427 Superblock backups stored on blocks: 00:10:38.427 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:38.427 00:10:38.427 Allocating group tables: 0/64 done 00:10:38.427 Writing inode tables: 0/64 done 00:10:38.686 Creating journal (8192 blocks): done 00:10:39.624 Writing superblocks and filesystem accounting information: 0/64 done 00:10:39.624 00:10:39.624 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:39.624 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:40.563 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:40.563 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 
00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 898044 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:40.564 00:10:40.564 real 0m2.040s 00:10:40.564 user 0m0.022s 00:10:40.564 sys 0m0.047s 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:40.564 ************************************ 00:10:40.564 END TEST filesystem_in_capsule_ext4 00:10:40.564 ************************************ 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:40.564 08:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.564 ************************************ 00:10:40.564 START TEST filesystem_in_capsule_btrfs 00:10:40.564 ************************************ 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' 
btrfs = ext4 ']' 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:40.564 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:40.824 btrfs-progs v6.6.2 00:10:40.824 See https://btrfs.readthedocs.io for more information. 00:10:40.824 00:10:40.824 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:40.824 NOTE: several default settings have changed in version 5.15, please make sure 00:10:40.824 this does not affect your deployments: 00:10:40.824 - DUP for metadata (-m dup) 00:10:40.824 - enabled no-holes (-O no-holes) 00:10:40.824 - enabled free-space-tree (-R free-space-tree) 00:10:40.824 00:10:40.824 Label: (null) 00:10:40.824 UUID: 509e451e-c970-461d-8ca5-ddcbafa8fd54 00:10:40.824 Node size: 16384 00:10:40.824 Sector size: 4096 00:10:40.824 Filesystem size: 510.00MiB 00:10:40.824 Block group profiles: 00:10:40.824 Data: single 8.00MiB 00:10:40.824 Metadata: DUP 32.00MiB 00:10:40.824 System: DUP 8.00MiB 00:10:40.824 SSD detected: yes 00:10:40.824 Zoned device: no 00:10:40.825 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:40.825 Runtime features: free-space-tree 00:10:40.825 Checksum: crc32c 00:10:40.825 Number of devices: 1 00:10:40.825 Devices: 00:10:40.825 ID SIZE PATH 00:10:40.825 1 510.00MiB /dev/nvme0n1p1 00:10:40.825 00:10:40.825 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:40.825 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:41.764 08:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 898044 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:41.764 00:10:41.764 real 0m1.146s 00:10:41.764 user 0m0.020s 00:10:41.764 sys 0m0.115s 00:10:41.764 08:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.764 08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:41.764 ************************************ 00:10:41.764 END TEST filesystem_in_capsule_btrfs 00:10:41.764 ************************************ 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.764 ************************************ 00:10:41.764 START TEST filesystem_in_capsule_xfs 00:10:41.764 ************************************ 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:41.764 08:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:41.764 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:41.764 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:41.764 = sectsz=512 attr=2, projid32bit=1 00:10:41.764 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:41.764 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:41.764 data = bsize=4096 blocks=130560, imaxpct=25 00:10:41.764 = sunit=0 swidth=0 blks 00:10:41.764 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:41.764 log =internal log bsize=4096 blocks=16384, version=2 00:10:41.764 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:41.764 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:42.702 Discarding blocks...Done. 
00:10:42.702 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:42.702 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 898044 00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:45.238 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:45.238 00:10:45.238 real 0m3.532s 00:10:45.238 user 0m0.017s 00:10:45.238 sys 0m0.063s 00:10:45.239 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.239 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:45.239 ************************************ 00:10:45.239 END TEST filesystem_in_capsule_xfs 00:10:45.239 ************************************ 00:10:45.239 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:45.239 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:45.239 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.497 08:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 898044 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 898044 ']' 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 898044 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.497 08:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 898044 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 898044' 00:10:45.497 killing process with pid 898044 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 898044 00:10:45.497 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 898044 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:46.118 00:10:46.118 real 0m13.048s 00:10:46.118 user 0m50.216s 00:10:46.118 sys 0m1.870s 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.118 ************************************ 00:10:46.118 END TEST nvmf_filesystem_in_capsule 00:10:46.118 ************************************ 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.118 rmmod nvme_tcp 00:10:46.118 rmmod nvme_fabrics 00:10:46.118 rmmod nvme_keyring 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.118 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:48.024 00:10:48.024 real 
0m28.004s 00:10:48.024 user 1m30.830s 00:10:48.024 sys 0m5.129s 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 ************************************ 00:10:48.024 END TEST nvmf_filesystem 00:10:48.024 ************************************ 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:48.024 ************************************ 00:10:48.024 START TEST nvmf_target_discovery 00:10:48.024 ************************************ 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:48.024 * Looking for test storage... 
00:10:48.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.024 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:48.025 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:49.932 
08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:10:49.932 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:49.932 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.932 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:49.933 08:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:49.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.933 08:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:49.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.933 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:50.193 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:50.193 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.193 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.193 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.193 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.193 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:50.193 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.193 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:50.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:50.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:10:50.194 00:10:50.194 --- 10.0.0.2 ping statistics --- 00:10:50.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.194 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:10:50.194 00:10:50.194 --- 10.0.0.1 ping statistics --- 00:10:50.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.194 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:50.194 08:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=901734 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 901734 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 901734 ']' 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.194 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.194 [2024-07-26 08:44:08.607018] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:10:50.194 [2024-07-26 08:44:08.607137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.194 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.499 [2024-07-26 08:44:08.654547] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:50.499 [2024-07-26 08:44:08.705638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.499 [2024-07-26 08:44:08.818884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.499 [2024-07-26 08:44:08.818969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.499 [2024-07-26 08:44:08.819003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.499 [2024-07-26 08:44:08.819030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.499 [2024-07-26 08:44:08.819054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:50.499 [2024-07-26 08:44:08.819140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.499 [2024-07-26 08:44:08.819199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.499 [2024-07-26 08:44:08.819268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.500 [2024-07-26 08:44:08.819258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.760 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.760 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:10:50.760 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:50.760 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:50.760 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.760 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.760 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.760 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.760 08:44:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.760 [2024-07-26 08:44:08.999972] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.760 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.760 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:50.760 08:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.760 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:50.760 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.760 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.760 Null1 00:10:50.760 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.760 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.760 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 [2024-07-26 08:44:09.040329] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 Null2 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 
08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 Null3 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 Null4 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:50.761 08:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:51.021 00:10:51.021 Discovery Log Number of Records 6, Generation counter 6 00:10:51.021 =====Discovery Log Entry 0====== 00:10:51.021 trtype: tcp 00:10:51.021 adrfam: ipv4 00:10:51.021 subtype: current discovery subsystem 00:10:51.021 treq: not required 00:10:51.021 portid: 0 00:10:51.021 trsvcid: 4420 00:10:51.021 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:51.021 traddr: 10.0.0.2 00:10:51.021 eflags: explicit discovery connections, duplicate discovery information 00:10:51.021 sectype: none 00:10:51.021 =====Discovery Log Entry 1====== 00:10:51.021 trtype: tcp 00:10:51.021 adrfam: ipv4 00:10:51.021 subtype: nvme subsystem 00:10:51.021 treq: not required 00:10:51.021 portid: 0 00:10:51.021 trsvcid: 4420 00:10:51.021 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:51.021 traddr: 10.0.0.2 00:10:51.021 eflags: none 00:10:51.021 sectype: none 00:10:51.021 =====Discovery Log Entry 2====== 00:10:51.021 trtype: tcp 00:10:51.021 adrfam: ipv4 00:10:51.021 subtype: nvme subsystem 00:10:51.021 treq: not required 00:10:51.021 portid: 0 00:10:51.021 trsvcid: 4420 00:10:51.021 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:51.021 traddr: 10.0.0.2 00:10:51.021 eflags: none 00:10:51.021 sectype: none 00:10:51.021 =====Discovery Log Entry 3====== 00:10:51.021 trtype: tcp 00:10:51.021 adrfam: ipv4 00:10:51.021 subtype: nvme subsystem 00:10:51.021 treq: not required 00:10:51.021 portid: 
0 00:10:51.021 trsvcid: 4420 00:10:51.021 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:51.021 traddr: 10.0.0.2 00:10:51.021 eflags: none 00:10:51.022 sectype: none 00:10:51.022 =====Discovery Log Entry 4====== 00:10:51.022 trtype: tcp 00:10:51.022 adrfam: ipv4 00:10:51.022 subtype: nvme subsystem 00:10:51.022 treq: not required 00:10:51.022 portid: 0 00:10:51.022 trsvcid: 4420 00:10:51.022 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:51.022 traddr: 10.0.0.2 00:10:51.022 eflags: none 00:10:51.022 sectype: none 00:10:51.022 =====Discovery Log Entry 5====== 00:10:51.022 trtype: tcp 00:10:51.022 adrfam: ipv4 00:10:51.022 subtype: discovery subsystem referral 00:10:51.022 treq: not required 00:10:51.022 portid: 0 00:10:51.022 trsvcid: 4430 00:10:51.022 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:51.022 traddr: 10.0.0.2 00:10:51.022 eflags: none 00:10:51.022 sectype: none 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:51.022 Perform nvmf subsystem discovery via RPC 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.022 [ 00:10:51.022 { 00:10:51.022 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:51.022 "subtype": "Discovery", 00:10:51.022 "listen_addresses": [ 00:10:51.022 { 00:10:51.022 "trtype": "TCP", 00:10:51.022 "adrfam": "IPv4", 00:10:51.022 "traddr": "10.0.0.2", 00:10:51.022 "trsvcid": "4420" 00:10:51.022 } 00:10:51.022 ], 00:10:51.022 "allow_any_host": true, 00:10:51.022 "hosts": [] 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.022 "subtype": "NVMe", 00:10:51.022 "listen_addresses": [ 
00:10:51.022 { 00:10:51.022 "trtype": "TCP", 00:10:51.022 "adrfam": "IPv4", 00:10:51.022 "traddr": "10.0.0.2", 00:10:51.022 "trsvcid": "4420" 00:10:51.022 } 00:10:51.022 ], 00:10:51.022 "allow_any_host": true, 00:10:51.022 "hosts": [], 00:10:51.022 "serial_number": "SPDK00000000000001", 00:10:51.022 "model_number": "SPDK bdev Controller", 00:10:51.022 "max_namespaces": 32, 00:10:51.022 "min_cntlid": 1, 00:10:51.022 "max_cntlid": 65519, 00:10:51.022 "namespaces": [ 00:10:51.022 { 00:10:51.022 "nsid": 1, 00:10:51.022 "bdev_name": "Null1", 00:10:51.022 "name": "Null1", 00:10:51.022 "nguid": "99C1EC4EBBF34B3893405F103D697ACB", 00:10:51.022 "uuid": "99c1ec4e-bbf3-4b38-9340-5f103d697acb" 00:10:51.022 } 00:10:51.022 ] 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:51.022 "subtype": "NVMe", 00:10:51.022 "listen_addresses": [ 00:10:51.022 { 00:10:51.022 "trtype": "TCP", 00:10:51.022 "adrfam": "IPv4", 00:10:51.022 "traddr": "10.0.0.2", 00:10:51.022 "trsvcid": "4420" 00:10:51.022 } 00:10:51.022 ], 00:10:51.022 "allow_any_host": true, 00:10:51.022 "hosts": [], 00:10:51.022 "serial_number": "SPDK00000000000002", 00:10:51.022 "model_number": "SPDK bdev Controller", 00:10:51.022 "max_namespaces": 32, 00:10:51.022 "min_cntlid": 1, 00:10:51.022 "max_cntlid": 65519, 00:10:51.022 "namespaces": [ 00:10:51.022 { 00:10:51.022 "nsid": 1, 00:10:51.022 "bdev_name": "Null2", 00:10:51.022 "name": "Null2", 00:10:51.022 "nguid": "4075ED66A65B4D8F8A45101FB013C7F4", 00:10:51.022 "uuid": "4075ed66-a65b-4d8f-8a45-101fb013c7f4" 00:10:51.022 } 00:10:51.022 ] 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:51.022 "subtype": "NVMe", 00:10:51.022 "listen_addresses": [ 00:10:51.022 { 00:10:51.022 "trtype": "TCP", 00:10:51.022 "adrfam": "IPv4", 00:10:51.022 "traddr": "10.0.0.2", 00:10:51.022 "trsvcid": "4420" 00:10:51.022 } 00:10:51.022 ], 00:10:51.022 "allow_any_host": true, 00:10:51.022 "hosts": [], 00:10:51.022 
"serial_number": "SPDK00000000000003", 00:10:51.022 "model_number": "SPDK bdev Controller", 00:10:51.022 "max_namespaces": 32, 00:10:51.022 "min_cntlid": 1, 00:10:51.022 "max_cntlid": 65519, 00:10:51.022 "namespaces": [ 00:10:51.022 { 00:10:51.022 "nsid": 1, 00:10:51.022 "bdev_name": "Null3", 00:10:51.022 "name": "Null3", 00:10:51.022 "nguid": "2F95FF777553467F8C350960CA84F0C7", 00:10:51.022 "uuid": "2f95ff77-7553-467f-8c35-0960ca84f0c7" 00:10:51.022 } 00:10:51.022 ] 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:51.022 "subtype": "NVMe", 00:10:51.022 "listen_addresses": [ 00:10:51.022 { 00:10:51.022 "trtype": "TCP", 00:10:51.022 "adrfam": "IPv4", 00:10:51.022 "traddr": "10.0.0.2", 00:10:51.022 "trsvcid": "4420" 00:10:51.022 } 00:10:51.022 ], 00:10:51.022 "allow_any_host": true, 00:10:51.022 "hosts": [], 00:10:51.022 "serial_number": "SPDK00000000000004", 00:10:51.022 "model_number": "SPDK bdev Controller", 00:10:51.022 "max_namespaces": 32, 00:10:51.022 "min_cntlid": 1, 00:10:51.022 "max_cntlid": 65519, 00:10:51.022 "namespaces": [ 00:10:51.022 { 00:10:51.022 "nsid": 1, 00:10:51.022 "bdev_name": "Null4", 00:10:51.022 "name": "Null4", 00:10:51.022 "nguid": "D4B9B38C8EF7418E84E4B0E799E35531", 00:10:51.022 "uuid": "d4b9b38c-8ef7-418e-84e4-b0e799e35531" 00:10:51.022 } 00:10:51.022 ] 00:10:51.022 } 00:10:51.022 ] 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:10:51.022 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:51.023 
08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:51.023 rmmod nvme_tcp 00:10:51.023 rmmod nvme_fabrics 00:10:51.023 rmmod nvme_keyring 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 901734 ']' 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 901734 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 901734 ']' 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 901734 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.023 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 901734 00:10:51.281 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.281 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:10:51.281 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 901734' 00:10:51.282 killing process with pid 901734 00:10:51.282 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 901734 00:10:51.282 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 901734 00:10:51.282 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.282 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:51.282 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:51.282 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.282 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.282 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.282 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.282 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:53.826 00:10:53.826 real 0m5.356s 00:10:53.826 user 0m4.423s 00:10:53.826 sys 0m1.792s 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:53.826 ************************************ 00:10:53.826 END TEST 
nvmf_target_discovery 00:10:53.826 ************************************ 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:53.826 ************************************ 00:10:53.826 START TEST nvmf_referrals 00:10:53.826 ************************************ 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:53.826 * Looking for test storage... 00:10:53.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.826 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.827 08:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:53.827 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:55.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:55.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.734 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:55.735 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:0a:00.1: cvl_0_1' 00:10:55.735 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:55.735 08:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:55.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:10:55.735 00:10:55.735 --- 10.0.0.2 ping statistics --- 00:10:55.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.735 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:55.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:10:55.735 00:10:55.735 --- 10.0.0.1 ping statistics --- 00:10:55.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.735 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=903737 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 903737 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 903737 ']' 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.735 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.735 [2024-07-26 08:44:14.020338] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:55.735 [2024-07-26 08:44:14.020440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.735 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.735 [2024-07-26 08:44:14.059734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:55.735 [2024-07-26 08:44:14.092269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.735 [2024-07-26 08:44:14.188705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:55.735 [2024-07-26 08:44:14.188770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.735 [2024-07-26 08:44:14.188787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.735 [2024-07-26 08:44:14.188801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.735 [2024-07-26 08:44:14.188813] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.735 [2024-07-26 08:44:14.188903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.735 [2024-07-26 08:44:14.188957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.735 [2024-07-26 08:44:14.189008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.735 [2024-07-26 08:44:14.189011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.994 
08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.994 [2024-07-26 08:44:14.349777] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.994 [2024-07-26 08:44:14.362033] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.994 08:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.994 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:55.994 08:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.253 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:56.512 08:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.512 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:56.513 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:56.513 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:56.513 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:56.513 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.513 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.513 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:10:56.513 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.772 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.032 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.293 08:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.293 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.552 08:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.552 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.812 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:57.812 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:57.812 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:57.812 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:57.812 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:57.812 08:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:57.812 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:57.812 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:10:57.812 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:57.813 rmmod nvme_tcp 00:10:57.813 rmmod nvme_fabrics 00:10:57.813 rmmod nvme_keyring 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 903737 ']' 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 903737 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 903737 ']' 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 903737 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 903737 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 903737' 00:10:57.813 killing process with pid 903737 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 903737 00:10:57.813 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 903737 00:10:58.074 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:58.074 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:58.074 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:58.074 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:58.074 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:58.074 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.074 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.074 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.980 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.980 00:10:59.980 real 0m6.591s 00:10:59.980 user 0m9.849s 00:10:59.980 sys 0m2.103s 00:10:59.980 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.980 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:59.980 ************************************ 00:10:59.980 END TEST nvmf_referrals 00:10:59.980 ************************************ 00:10:59.980 08:44:18 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:59.980 08:44:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.980 08:44:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.980 08:44:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.980 ************************************ 00:10:59.980 START TEST nvmf_connect_disconnect 00:10:59.980 ************************************ 00:10:59.980 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:00.239 * Looking for test storage... 00:11:00.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.239 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:00.240 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.145 08:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- 
# [[ tcp == rdma ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:02.145 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:02.145 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:02.145 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.145 08:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:02.145 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.145 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.146 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.146 08:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:11:02.404 00:11:02.404 --- 10.0.0.2 ping statistics --- 00:11:02.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.404 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:02.404 00:11:02.404 --- 10.0.0.1 ping statistics --- 00:11:02.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.404 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # 
nvmfpid=906029 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 906029 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 906029 ']' 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.404 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:02.404 [2024-07-26 08:44:20.733282] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:11:02.404 [2024-07-26 08:44:20.733372] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.404 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.404 [2024-07-26 08:44:20.771705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:02.405 [2024-07-26 08:44:20.803803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.664 [2024-07-26 08:44:20.898641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.664 [2024-07-26 08:44:20.898697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.664 [2024-07-26 08:44:20.898723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.664 [2024-07-26 08:44:20.898737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.664 [2024-07-26 08:44:20.898748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.664 [2024-07-26 08:44:20.898834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.664 [2024-07-26 08:44:20.898889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.664 [2024-07-26 08:44:20.898942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.664 [2024-07-26 08:44:20.898945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:02.664 [2024-07-26 08:44:21.055670] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:02.664 [2024-07-26 08:44:21.112601] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:02.664 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:05.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [identical record repeated once per connect/disconnect iteration from 00:11:17.274 through 00:14:24.153]
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:54.553 rmmod nvme_tcp 00:14:54.553 
rmmod nvme_fabrics 00:14:54.553 rmmod nvme_keyring 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 906029 ']' 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 906029 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 906029 ']' 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 906029 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 906029 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 906029' 00:14:54.553 killing process with pid 906029 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 906029 00:14:54.553 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 906029 
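The teardown above runs the killprocess helper (kill the target pid, then wait for it). A hedged sketch of that pattern, demonstrated on a throwaway `sleep` process rather than the real nvmf_tgt pid:

```shell
# Hedged sketch of the killprocess pattern traced above: verify the pid is
# alive, signal it, then reap it so no zombie is left behind.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # nothing to do if already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap; ignore the kill exit status
    return 0
}

sleep 30 &                                   # throwaway stand-in for nvmf_tgt
bgpid=$!
killprocess "$bgpid"
alive=0
kill -0 "$bgpid" 2>/dev/null && alive=1      # 0 means the process is gone
```

The `kill -0` probe mirrors the `kill -0 906029` check in the trace: it delivers no signal, only tests whether the pid exists.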
00:14:54.553 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:54.553 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:54.553 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:54.553 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.553 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:54.553 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.553 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.553 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:57.090 00:14:57.090 real 3m56.611s 00:14:57.090 user 15m0.921s 00:14:57.090 sys 0m34.413s 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:57.090 ************************************ 00:14:57.090 END TEST nvmf_connect_disconnect 00:14:57.090 ************************************ 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:57.090 08:48:15 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:57.090 ************************************ 00:14:57.090 START TEST nvmf_multitarget 00:14:57.090 ************************************ 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:57.090 * Looking for test storage... 00:14:57.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.090 
08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.090 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.091 08:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:57.091 
08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:57.091 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:58.996 08:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.996 08:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:58.996 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:58.996 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.996 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.997 08:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:58.997 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:58.997 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:58.997 08:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:58.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:14:58.997 00:14:58.997 --- 10.0.0.2 ping statistics --- 00:14:58.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.997 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:58.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:14:58.997 00:14:58.997 --- 10.0.0.1 ping statistics --- 00:14:58.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.997 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=937783 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 937783 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 937783 ']' 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.997 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:58.997 [2024-07-26 08:48:17.325036] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:14:58.997 [2024-07-26 08:48:17.325129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.997 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.997 [2024-07-26 08:48:17.361773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:58.997 [2024-07-26 08:48:17.388658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.256 [2024-07-26 08:48:17.479232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.256 [2024-07-26 08:48:17.479290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.256 [2024-07-26 08:48:17.479304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.256 [2024-07-26 08:48:17.479316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.256 [2024-07-26 08:48:17.479325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.256 [2024-07-26 08:48:17.479445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.256 [2024-07-26 08:48:17.479513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.256 [2024-07-26 08:48:17.479580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.256 [2024-07-26 08:48:17.479583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.256 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.256 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:59.256 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:59.256 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:59.256 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:59.256 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.256 08:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:59.256 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:59.256 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:59.513 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:59.514 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:59.514 "nvmf_tgt_1" 00:14:59.514 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:59.772 "nvmf_tgt_2" 00:14:59.772 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:59.772 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:59.772 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:59.772 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:59.772 true 00:15:00.029 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:00.029 true 
00:15:00.029 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:00.029 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:00.287 rmmod nvme_tcp 00:15:00.287 rmmod nvme_fabrics 00:15:00.287 rmmod nvme_keyring 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 937783 ']' 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 937783 00:15:00.287 08:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 937783 ']' 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 937783 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 937783 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 937783' 00:15:00.287 killing process with pid 937783 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 937783 00:15:00.287 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 937783 00:15:00.547 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:00.547 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:00.547 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:00.547 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.547 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:00.547 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.547 
08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.547 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.452 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:02.452 00:15:02.452 real 0m5.723s 00:15:02.452 user 0m6.650s 00:15:02.452 sys 0m1.897s 00:15:02.452 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.452 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:02.452 ************************************ 00:15:02.452 END TEST nvmf_multitarget 00:15:02.452 ************************************ 00:15:02.452 08:48:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:02.452 08:48:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:02.452 08:48:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.452 08:48:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:02.452 ************************************ 00:15:02.452 START TEST nvmf_rpc 00:15:02.452 ************************************ 00:15:02.452 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:02.711 * Looking for test storage... 
00:15:02.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.711 
08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.711 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.711 08:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.712 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.712 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.712 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.712 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.712 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.712 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:02.712 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:02.712 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:02.712 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.619 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:04.620 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:04.620 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:04.620 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:04.620 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.620 08:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.620 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.620 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.620 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.620 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:04.620 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.620 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.620 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.620 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:04.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:04.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:15:04.881 00:15:04.881 --- 10.0.0.2 ping statistics --- 00:15:04.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.881 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:15:04.881 00:15:04.881 --- 10.0.0.1 ping statistics --- 00:15:04.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.881 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=939880 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 939880 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 939880 ']' 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.881 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.881 [2024-07-26 08:48:23.155258] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
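`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 939880) is up and answering on `/var/tmp/spdk.sock`. At its core this is a bounded poll loop; the following is a generic, simplified sketch of that idea (the real helper in `autotest_common.sh` additionally probes the RPC UNIX socket and verifies the pid is still alive, so treat this as an assumption-laden reduction):

```shell
# Minimal bounded-poll sketch of the waitforlisten idea: wait until a
# path (e.g. an application's UNIX socket) appears, fail after max tries.
# Simplification of the real helper in autotest_common.sh.
wait_for_path() {
    local path=$1 max=${2:-50} i=0
    until [ -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max" ] && return 1   # give up after max attempts
        sleep 0.1
    done
    return 0
}
```

Once the wait returns, the test script starts issuing `rpc.py`-style calls (`rpc_cmd` in this trace) against the now-listening socket.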
00:15:04.881 [2024-07-26 08:48:23.155347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.881 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.881 [2024-07-26 08:48:23.192560] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:04.881 [2024-07-26 08:48:23.219756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.881 [2024-07-26 08:48:23.304653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.881 [2024-07-26 08:48:23.304704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.881 [2024-07-26 08:48:23.304727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.881 [2024-07-26 08:48:23.304738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.882 [2024-07-26 08:48:23.304748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:04.882 [2024-07-26 08:48:23.304832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.882 [2024-07-26 08:48:23.304897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.882 [2024-07-26 08:48:23.304965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.882 [2024-07-26 08:48:23.304968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:05.139 "tick_rate": 2700000000, 00:15:05.139 "poll_groups": [ 00:15:05.139 { 00:15:05.139 "name": "nvmf_tgt_poll_group_000", 00:15:05.139 "admin_qpairs": 0, 00:15:05.139 "io_qpairs": 0, 00:15:05.139 "current_admin_qpairs": 0, 00:15:05.139 "current_io_qpairs": 0, 00:15:05.139 "pending_bdev_io": 0, 00:15:05.139 "completed_nvme_io": 0, 
00:15:05.139 "transports": [] 00:15:05.139 }, 00:15:05.139 { 00:15:05.139 "name": "nvmf_tgt_poll_group_001", 00:15:05.139 "admin_qpairs": 0, 00:15:05.139 "io_qpairs": 0, 00:15:05.139 "current_admin_qpairs": 0, 00:15:05.139 "current_io_qpairs": 0, 00:15:05.139 "pending_bdev_io": 0, 00:15:05.139 "completed_nvme_io": 0, 00:15:05.139 "transports": [] 00:15:05.139 }, 00:15:05.139 { 00:15:05.139 "name": "nvmf_tgt_poll_group_002", 00:15:05.139 "admin_qpairs": 0, 00:15:05.139 "io_qpairs": 0, 00:15:05.139 "current_admin_qpairs": 0, 00:15:05.139 "current_io_qpairs": 0, 00:15:05.139 "pending_bdev_io": 0, 00:15:05.139 "completed_nvme_io": 0, 00:15:05.139 "transports": [] 00:15:05.139 }, 00:15:05.139 { 00:15:05.139 "name": "nvmf_tgt_poll_group_003", 00:15:05.139 "admin_qpairs": 0, 00:15:05.139 "io_qpairs": 0, 00:15:05.139 "current_admin_qpairs": 0, 00:15:05.139 "current_io_qpairs": 0, 00:15:05.139 "pending_bdev_io": 0, 00:15:05.139 "completed_nvme_io": 0, 00:15:05.139 "transports": [] 00:15:05.139 } 00:15:05.139 ] 00:15:05.139 }' 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.139 [2024-07-26 08:48:23.554905] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:05.139 "tick_rate": 2700000000, 00:15:05.139 "poll_groups": [ 00:15:05.139 { 00:15:05.139 "name": "nvmf_tgt_poll_group_000", 00:15:05.139 "admin_qpairs": 0, 00:15:05.139 "io_qpairs": 0, 00:15:05.139 "current_admin_qpairs": 0, 00:15:05.139 "current_io_qpairs": 0, 00:15:05.139 "pending_bdev_io": 0, 00:15:05.139 "completed_nvme_io": 0, 00:15:05.139 "transports": [ 00:15:05.139 { 00:15:05.139 "trtype": "TCP" 00:15:05.139 } 00:15:05.139 ] 00:15:05.139 }, 00:15:05.139 { 00:15:05.139 "name": "nvmf_tgt_poll_group_001", 00:15:05.139 "admin_qpairs": 0, 00:15:05.139 "io_qpairs": 0, 00:15:05.139 "current_admin_qpairs": 0, 00:15:05.139 "current_io_qpairs": 0, 00:15:05.139 "pending_bdev_io": 0, 00:15:05.139 "completed_nvme_io": 0, 00:15:05.139 "transports": [ 00:15:05.139 { 00:15:05.139 "trtype": "TCP" 00:15:05.139 } 00:15:05.139 ] 00:15:05.139 }, 00:15:05.139 { 00:15:05.139 "name": "nvmf_tgt_poll_group_002", 00:15:05.139 "admin_qpairs": 0, 00:15:05.139 "io_qpairs": 0, 00:15:05.139 "current_admin_qpairs": 0, 00:15:05.139 "current_io_qpairs": 0, 00:15:05.139 "pending_bdev_io": 0, 00:15:05.139 "completed_nvme_io": 0, 00:15:05.139 
"transports": [ 00:15:05.139 { 00:15:05.139 "trtype": "TCP" 00:15:05.139 } 00:15:05.139 ] 00:15:05.139 }, 00:15:05.139 { 00:15:05.139 "name": "nvmf_tgt_poll_group_003", 00:15:05.139 "admin_qpairs": 0, 00:15:05.139 "io_qpairs": 0, 00:15:05.139 "current_admin_qpairs": 0, 00:15:05.139 "current_io_qpairs": 0, 00:15:05.139 "pending_bdev_io": 0, 00:15:05.139 "completed_nvme_io": 0, 00:15:05.139 "transports": [ 00:15:05.139 { 00:15:05.139 "trtype": "TCP" 00:15:05.139 } 00:15:05.139 ] 00:15:05.139 } 00:15:05.139 ] 00:15:05.139 }' 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:05.139 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:05.398 08:48:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.398 Malloc1 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.398 [2024-07-26 08:48:23.720809] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:05.398 [2024-07-26 08:48:23.743341] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:05.398 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:05.398 could not add new controller: failed to write to nvme-fabrics device 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
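The `*ERROR*` from `ctrlr.c` above is the expected outcome: the script deliberately connects with a host NQN the subsystem has not admitted, and the qpair is rejected. The access-control knobs this section exercises can be summarized as plain `rpc.py` calls (subsystem and host NQNs taken from this log; a sketch only, assuming a running target and an SPDK checkout — the `RPC` path is an assumption):

```shell
# Access-control sequence exercised by target/rpc.sh (sketch; NQNs from this log).
RPC=scripts/rpc.py   # hypothetical path relative to an SPDK tree; adjust as needed
SUBSYS=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

$RPC nvmf_subsystem_allow_any_host -d "$SUBSYS"   # closed: only listed hosts connect
# nvme connect as $HOST now fails: "Subsystem ... does not allow host"
$RPC nvmf_subsystem_add_host "$SUBSYS" "$HOST"    # admit this host explicitly
$RPC nvmf_subsystem_remove_host "$SUBSYS" "$HOST" # revoke it again
$RPC nvmf_subsystem_allow_any_host -e "$SUBSYS"   # open: any host may connect
```

The trace that follows walks exactly this ladder: add_host makes the second `nvme connect` succeed, remove_host makes the third fail again, and `allow_any_host -e` finally opens the subsystem to all initiators.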
00:15:05.398 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:06.336 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.336 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:06.336 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.336 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:06.336 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:08.243 08:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:08.243 [2024-07-26 08:48:26.584103] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:08.243 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:08.243 could not add new controller: failed to write to nvme-fabrics device 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:08.243 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:08.244 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:08.244 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:08.244 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:08.244 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.244 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.244 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.244 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:09.182 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:09.182 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:09.182 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.182 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:09.182 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:11.131 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:11.132 08:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.132 08:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.132 [2024-07-26 08:48:29.409137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.132 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:11.701 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:11.701 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:11.701 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.701 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:11.701 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:13.602 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:13.602 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:13.602 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.602 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:13.602 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.602 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:13.602 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.860 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.860 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:13.860 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:13.860 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:15:13.860 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:13.860 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.860 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:13.860 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.861 [2024-07-26 08:48:32.170242] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.861 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:14.429 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:14.429 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:15:14.429 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:14.429 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:14.429 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:16.333 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:16.333 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:16.333 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.592 [2024-07-26 08:48:34.902442] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.592 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:17.159 08:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:17.159 08:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:17.159 08:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.159 08:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:17.159 
08:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.695 [2024-07-26 08:48:37.638443] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.695 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:19.953 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:19.953 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:19.953 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.953 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:19.953 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:22.489 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:22.489 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:22.489 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.489 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:22.489 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.489 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:22.489 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.489 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:22.489 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:22.489 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.490 [2024-07-26 08:48:40.459293] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.490 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:22.748 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:22.748 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:22.748 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.748 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:22.748 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:24.652 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:24.652 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:24.652 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.652 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:24.652 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.652 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:24.652 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.913 08:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 [2024-07-26 08:48:43.229394] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 
08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 
08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 [2024-07-26 08:48:43.277449] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:24.913 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.914 [2024-07-26 08:48:43.325589] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.914 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.172 [2024-07-26 08:48:43.373774] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.172 [2024-07-26 08:48:43.421898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.172 08:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.172 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.173 08:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:25.173 "tick_rate": 2700000000, 00:15:25.173 "poll_groups": [ 00:15:25.173 { 00:15:25.173 "name": "nvmf_tgt_poll_group_000", 00:15:25.173 "admin_qpairs": 2, 00:15:25.173 "io_qpairs": 84, 00:15:25.173 "current_admin_qpairs": 0, 00:15:25.173 "current_io_qpairs": 0, 00:15:25.173 "pending_bdev_io": 0, 00:15:25.173 "completed_nvme_io": 183, 00:15:25.173 "transports": [ 00:15:25.173 { 00:15:25.173 "trtype": "TCP" 00:15:25.173 } 00:15:25.173 ] 00:15:25.173 }, 00:15:25.173 { 00:15:25.173 "name": "nvmf_tgt_poll_group_001", 00:15:25.173 "admin_qpairs": 2, 00:15:25.173 "io_qpairs": 84, 00:15:25.173 "current_admin_qpairs": 0, 00:15:25.173 "current_io_qpairs": 0, 00:15:25.173 "pending_bdev_io": 0, 00:15:25.173 "completed_nvme_io": 217, 00:15:25.173 "transports": [ 00:15:25.173 { 00:15:25.173 "trtype": "TCP" 00:15:25.173 } 00:15:25.173 ] 00:15:25.173 }, 00:15:25.173 { 00:15:25.173 "name": "nvmf_tgt_poll_group_002", 00:15:25.173 "admin_qpairs": 1, 00:15:25.173 "io_qpairs": 84, 00:15:25.173 "current_admin_qpairs": 0, 00:15:25.173 "current_io_qpairs": 0, 00:15:25.173 "pending_bdev_io": 0, 00:15:25.173 "completed_nvme_io": 135, 00:15:25.173 "transports": [ 00:15:25.173 { 00:15:25.173 "trtype": "TCP" 00:15:25.173 } 00:15:25.173 ] 00:15:25.173 }, 00:15:25.173 { 00:15:25.173 "name": "nvmf_tgt_poll_group_003", 00:15:25.173 "admin_qpairs": 2, 00:15:25.173 "io_qpairs": 84, 00:15:25.173 "current_admin_qpairs": 0, 00:15:25.173 "current_io_qpairs": 0, 00:15:25.173 "pending_bdev_io": 0, 
00:15:25.173 "completed_nvme_io": 151, 00:15:25.173 "transports": [ 00:15:25.173 { 00:15:25.173 "trtype": "TCP" 00:15:25.173 } 00:15:25.173 ] 00:15:25.173 } 00:15:25.173 ] 00:15:25.173 }' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.173 rmmod nvme_tcp 00:15:25.173 rmmod nvme_fabrics 00:15:25.173 rmmod nvme_keyring 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 939880 ']' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 939880 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 939880 ']' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 939880 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.173 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 939880 00:15:25.431 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.431 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.431 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 939880' 00:15:25.431 killing process with pid 939880 00:15:25.431 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 939880 00:15:25.431 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 939880 00:15:25.692 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.692 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.692 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.692 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.692 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.692 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.692 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.692 08:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.596 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.596 00:15:27.596 real 0m25.084s 00:15:27.596 user 1m21.519s 00:15:27.596 sys 0m4.078s 00:15:27.596 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:27.596 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.596 ************************************ 00:15:27.596 END TEST nvmf_rpc 00:15:27.596 ************************************ 00:15:27.596 08:48:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:27.596 08:48:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:27.596 08:48:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.596 08:48:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:15:27.597 ************************************ 00:15:27.597 START TEST nvmf_invalid 00:15:27.597 ************************************ 00:15:27.597 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:27.597 * Looking for test storage... 00:15:27.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.597 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.855 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.761 08:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.761 
08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:29.761 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.761 08:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:29.761 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.761 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.761 
08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:29.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:29.761 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:29.761 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:29.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:15:29.762 00:15:29.762 --- 10.0.0.2 ping statistics --- 00:15:29.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.762 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:29.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:29.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:15:29.762 00:15:29.762 --- 10.0.0.1 ping statistics --- 00:15:29.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.762 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=944355 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 944355 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 944355 ']' 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.762 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:30.021 [2024-07-26 08:48:48.227715] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:15:30.021 [2024-07-26 08:48:48.227798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.021 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.021 [2024-07-26 08:48:48.273388] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:30.021 [2024-07-26 08:48:48.305161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.021 [2024-07-26 08:48:48.399041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
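The `waitforlisten 944355` call above blocks until the freshly launched `nvmf_tgt` answers on `/var/tmp/spdk.sock`. A simplified sketch of that polling pattern (the real helper in `common/autotest_common.sh` also checks the PID is alive and retries actual RPC calls; the function name and 100-retry default here mirror the log but the body is an illustrative reduction):

```shell
# Simplified sketch of the waitforlisten pattern seen in the trace: poll
# until a UNIX-domain socket appears, up to max_retries (default 100, as
# in the log). The real SPDK helper additionally verifies the target PID
# and that the RPC server responds.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # socket exists: target is listening
        sleep 0.1
    done
    return 1                          # timed out waiting for the socket
}
```

In the flow above this would be `wait_for_sock /var/tmp/spdk.sock`, returning once `nvmf_tgt` has created its RPC socket inside the namespace.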
00:15:30.021 [2024-07-26 08:48:48.399115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.021 [2024-07-26 08:48:48.399143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.021 [2024-07-26 08:48:48.399157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.021 [2024-07-26 08:48:48.399169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.021 [2024-07-26 08:48:48.399259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.022 [2024-07-26 08:48:48.399316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.022 [2024-07-26 08:48:48.399378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.022 [2024-07-26 08:48:48.399381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.280 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.280 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:30.280 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.280 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.280 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:30.280 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.280 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:30.280 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8086 00:15:30.539 [2024-07-26 08:48:48.793416] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:30.539 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:30.539 { 00:15:30.539 "nqn": "nqn.2016-06.io.spdk:cnode8086", 00:15:30.539 "tgt_name": "foobar", 00:15:30.539 "method": "nvmf_create_subsystem", 00:15:30.539 "req_id": 1 00:15:30.539 } 00:15:30.539 Got JSON-RPC error response 00:15:30.539 response: 00:15:30.539 { 00:15:30.539 "code": -32603, 00:15:30.539 "message": "Unable to find target foobar" 00:15:30.539 }' 00:15:30.539 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:30.539 { 00:15:30.539 "nqn": "nqn.2016-06.io.spdk:cnode8086", 00:15:30.539 "tgt_name": "foobar", 00:15:30.539 "method": "nvmf_create_subsystem", 00:15:30.539 "req_id": 1 00:15:30.539 } 00:15:30.539 Got JSON-RPC error response 00:15:30.539 response: 00:15:30.539 { 00:15:30.539 "code": -32603, 00:15:30.539 "message": "Unable to find target foobar" 00:15:30.539 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:30.539 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:30.539 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27750 00:15:30.797 [2024-07-26 08:48:49.054307] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27750: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:30.797 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:30.797 { 00:15:30.797 "nqn": "nqn.2016-06.io.spdk:cnode27750", 00:15:30.797 "serial_number": 
"SPDKISFASTANDAWESOME\u001f", 00:15:30.797 "method": "nvmf_create_subsystem", 00:15:30.797 "req_id": 1 00:15:30.797 } 00:15:30.797 Got JSON-RPC error response 00:15:30.797 response: 00:15:30.797 { 00:15:30.797 "code": -32602, 00:15:30.797 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:30.797 }' 00:15:30.797 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:30.797 { 00:15:30.797 "nqn": "nqn.2016-06.io.spdk:cnode27750", 00:15:30.797 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:30.797 "method": "nvmf_create_subsystem", 00:15:30.797 "req_id": 1 00:15:30.797 } 00:15:30.797 Got JSON-RPC error response 00:15:30.797 response: 00:15:30.797 { 00:15:30.797 "code": -32602, 00:15:30.797 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:30.797 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:30.797 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:30.797 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15887 00:15:31.056 [2024-07-26 08:48:49.311129] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15887: invalid model number 'SPDK_Controller' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:31.056 { 00:15:31.056 "nqn": "nqn.2016-06.io.spdk:cnode15887", 00:15:31.056 "model_number": "SPDK_Controller\u001f", 00:15:31.056 "method": "nvmf_create_subsystem", 00:15:31.056 "req_id": 1 00:15:31.056 } 00:15:31.056 Got JSON-RPC error response 00:15:31.056 response: 00:15:31.056 { 00:15:31.056 "code": -32602, 00:15:31.056 "message": "Invalid MN SPDK_Controller\u001f" 00:15:31.056 }' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:31.056 { 00:15:31.056 "nqn": 
"nqn.2016-06.io.spdk:cnode15887", 00:15:31.056 "model_number": "SPDK_Controller\u001f", 00:15:31.056 "method": "nvmf_create_subsystem", 00:15:31.056 "req_id": 1 00:15:31.056 } 00:15:31.056 Got JSON-RPC error response 00:15:31.056 response: 00:15:31.056 { 00:15:31.056 "code": -32602, 00:15:31.056 "message": "Invalid MN SPDK_Controller\u001f" 00:15:31.056 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:31.056 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:31.056 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:31.056 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:31.056 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 
00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ G == \- ]] 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'G*&N0se)2w+y=-!a?x>f"' 00:15:31.057 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'G*&N0se)2w+y=-!a?x>f"' nqn.2016-06.io.spdk:cnode3709 00:15:31.317 [2024-07-26 08:48:49.624249] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3709: invalid serial number 'G*&N0se)2w+y=-!a?x>f"' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:31.317 { 00:15:31.317 "nqn": "nqn.2016-06.io.spdk:cnode3709", 00:15:31.317 "serial_number": "G*&N0se)2w+y=-!a?x>f\"", 00:15:31.317 "method": "nvmf_create_subsystem", 00:15:31.317 "req_id": 1 00:15:31.317 } 00:15:31.317 Got JSON-RPC error response 00:15:31.317 response: 00:15:31.317 { 00:15:31.317 "code": -32602, 00:15:31.317 "message": "Invalid SN G*&N0se)2w+y=-!a?x>f\"" 00:15:31.317 }' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:31.317 { 00:15:31.317 "nqn": "nqn.2016-06.io.spdk:cnode3709", 00:15:31.317 "serial_number": "G*&N0se)2w+y=-!a?x>f\"", 00:15:31.317 "method": "nvmf_create_subsystem", 00:15:31.317 "req_id": 1 00:15:31.317 } 00:15:31.317 Got JSON-RPC error response 00:15:31.317 response: 00:15:31.317 { 00:15:31.317 "code": -32602, 00:15:31.317 "message": "Invalid SN G*&N0se)2w+y=-!a?x>f\"" 00:15:31.317 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:31.317 
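The long `printf %x` / `echo -e` run in this trace is `gen_random_s` (from `target/invalid.sh`) building a random string one character at a time from the ASCII 32–127 pool. A compact sketch of the same idea (the pool is restricted to 33–126 here, dropping space and DEL, so the result survives command substitution; the trace's version includes them):

```shell
# Sketch of the gen_random_s helper traced above: pick `length` random
# printable ASCII characters and emit them as one string.
gen_random_s() {
    local length=$1 string= i
    # Build the character pool (ASCII 33-126) once, then index into it.
    local pool
    pool=$(printf '%b' "$(printf '\\x%x' $(seq 33 126))")
    for (( i = 0; i < length; i++ )); do
        string+=${pool:RANDOM % ${#pool}:1}
    done
    printf '%s\n' "$string"
}
```

`gen_random_s 21` produces a 21-character candidate serial number of the same shape as the `G*&N0se)2w+y=-!a?x>f"` string the test feeds to `nvmf_create_subsystem`.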
08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:31.317 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:31.317 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 
00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:31.317 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:31.318 
08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:31.318 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.318 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:31.577 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:31.577 08:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'd0~-LY.l5jk?Of74uKbhT9Wn%*|>HL:_E^q}hM%&R' 00:15:31.577 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'd0~-LY.l5jk?Of74uKbhT9Wn%*|>HL:_E^q}hM%&R' nqn.2016-06.io.spdk:cnode18556 00:15:31.837 [2024-07-26 08:48:50.041610] nvmf_rpc.c: 
422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18556: invalid model number 'd0~-LY.l5jk?Of74uKbhT9Wn%*|>HL:_E^q}hM%&R' 00:15:31.837 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:31.837 { 00:15:31.837 "nqn": "nqn.2016-06.io.spdk:cnode18556", 00:15:31.837 "model_number": "d0~-LY.l5jk?Of74uKbhT9Wn%*|>HL:_E^q}hM%&R", 00:15:31.837 "method": "nvmf_create_subsystem", 00:15:31.837 "req_id": 1 00:15:31.837 } 00:15:31.837 Got JSON-RPC error response 00:15:31.837 response: 00:15:31.837 { 00:15:31.837 "code": -32602, 00:15:31.837 "message": "Invalid MN d0~-LY.l5jk?Of74uKbhT9Wn%*|>HL:_E^q}hM%&R" 00:15:31.837 }' 00:15:31.837 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:31.837 { 00:15:31.837 "nqn": "nqn.2016-06.io.spdk:cnode18556", 00:15:31.837 "model_number": "d0~-LY.l5jk?Of74uKbhT9Wn%*|>HL:_E^q}hM%&R", 00:15:31.837 "method": "nvmf_create_subsystem", 00:15:31.837 "req_id": 1 00:15:31.837 } 00:15:31.837 Got JSON-RPC error response 00:15:31.837 response: 00:15:31.837 { 00:15:31.837 "code": -32602, 00:15:31.837 "message": "Invalid MN d0~-LY.l5jk?Of74uKbhT9Wn%*|>HL:_E^q}hM%&R" 00:15:31.837 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:31.837 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:32.126 [2024-07-26 08:48:50.298482] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.126 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:32.384 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:32.384 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:32.384 08:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:32.384 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:32.384 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:32.384 [2024-07-26 08:48:50.800136] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:32.384 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:32.384 { 00:15:32.384 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:32.384 "listen_address": { 00:15:32.384 "trtype": "tcp", 00:15:32.384 "traddr": "", 00:15:32.384 "trsvcid": "4421" 00:15:32.384 }, 00:15:32.384 "method": "nvmf_subsystem_remove_listener", 00:15:32.384 "req_id": 1 00:15:32.384 } 00:15:32.384 Got JSON-RPC error response 00:15:32.384 response: 00:15:32.384 { 00:15:32.384 "code": -32602, 00:15:32.384 "message": "Invalid parameters" 00:15:32.384 }' 00:15:32.384 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:32.384 { 00:15:32.384 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:32.384 "listen_address": { 00:15:32.384 "trtype": "tcp", 00:15:32.384 "traddr": "", 00:15:32.384 "trsvcid": "4421" 00:15:32.384 }, 00:15:32.384 "method": "nvmf_subsystem_remove_listener", 00:15:32.384 "req_id": 1 00:15:32.384 } 00:15:32.384 Got JSON-RPC error response 00:15:32.384 response: 00:15:32.384 { 00:15:32.384 "code": -32602, 00:15:32.384 "message": "Invalid parameters" 00:15:32.384 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:32.384 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20318 -i 0 00:15:32.643 [2024-07-26 08:48:51.044894] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20318: invalid cntlid range [0-65519] 00:15:32.643 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:32.643 { 00:15:32.643 "nqn": "nqn.2016-06.io.spdk:cnode20318", 00:15:32.643 "min_cntlid": 0, 00:15:32.643 "method": "nvmf_create_subsystem", 00:15:32.643 "req_id": 1 00:15:32.643 } 00:15:32.643 Got JSON-RPC error response 00:15:32.643 response: 00:15:32.643 { 00:15:32.643 "code": -32602, 00:15:32.643 "message": "Invalid cntlid range [0-65519]" 00:15:32.643 }' 00:15:32.643 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:32.643 { 00:15:32.643 "nqn": "nqn.2016-06.io.spdk:cnode20318", 00:15:32.643 "min_cntlid": 0, 00:15:32.643 "method": "nvmf_create_subsystem", 00:15:32.643 "req_id": 1 00:15:32.643 } 00:15:32.643 Got JSON-RPC error response 00:15:32.643 response: 00:15:32.643 { 00:15:32.643 "code": -32602, 00:15:32.643 "message": "Invalid cntlid range [0-65519]" 00:15:32.643 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:32.643 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25576 -i 65520 00:15:32.901 [2024-07-26 08:48:51.309781] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25576: invalid cntlid range [65520-65519] 00:15:32.901 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:32.901 { 00:15:32.901 "nqn": "nqn.2016-06.io.spdk:cnode25576", 00:15:32.901 "min_cntlid": 65520, 00:15:32.901 "method": "nvmf_create_subsystem", 00:15:32.901 "req_id": 1 00:15:32.901 } 00:15:32.901 Got JSON-RPC error response 00:15:32.901 response: 00:15:32.901 { 00:15:32.901 "code": -32602, 00:15:32.901 "message": "Invalid cntlid range [65520-65519]" 00:15:32.901 }' 00:15:32.901 08:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:32.901 { 00:15:32.901 "nqn": "nqn.2016-06.io.spdk:cnode25576", 00:15:32.901 "min_cntlid": 65520, 00:15:32.901 "method": "nvmf_create_subsystem", 00:15:32.901 "req_id": 1 00:15:32.901 } 00:15:32.901 Got JSON-RPC error response 00:15:32.901 response: 00:15:32.901 { 00:15:32.901 "code": -32602, 00:15:32.901 "message": "Invalid cntlid range [65520-65519]" 00:15:32.901 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:32.901 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30024 -I 0 00:15:33.159 [2024-07-26 08:48:51.558638] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30024: invalid cntlid range [1-0] 00:15:33.159 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:33.159 { 00:15:33.159 "nqn": "nqn.2016-06.io.spdk:cnode30024", 00:15:33.159 "max_cntlid": 0, 00:15:33.159 "method": "nvmf_create_subsystem", 00:15:33.159 "req_id": 1 00:15:33.159 } 00:15:33.159 Got JSON-RPC error response 00:15:33.159 response: 00:15:33.159 { 00:15:33.159 "code": -32602, 00:15:33.159 "message": "Invalid cntlid range [1-0]" 00:15:33.159 }' 00:15:33.159 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:33.159 { 00:15:33.159 "nqn": "nqn.2016-06.io.spdk:cnode30024", 00:15:33.159 "max_cntlid": 0, 00:15:33.159 "method": "nvmf_create_subsystem", 00:15:33.159 "req_id": 1 00:15:33.159 } 00:15:33.159 Got JSON-RPC error response 00:15:33.159 response: 00:15:33.159 { 00:15:33.159 "code": -32602, 00:15:33.159 "message": "Invalid cntlid range [1-0]" 00:15:33.159 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:33.159 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27831 -I 65520 00:15:33.417 [2024-07-26 08:48:51.807475] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27831: invalid cntlid range [1-65520] 00:15:33.417 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:33.417 { 00:15:33.417 "nqn": "nqn.2016-06.io.spdk:cnode27831", 00:15:33.417 "max_cntlid": 65520, 00:15:33.417 "method": "nvmf_create_subsystem", 00:15:33.417 "req_id": 1 00:15:33.417 } 00:15:33.417 Got JSON-RPC error response 00:15:33.417 response: 00:15:33.417 { 00:15:33.417 "code": -32602, 00:15:33.417 "message": "Invalid cntlid range [1-65520]" 00:15:33.417 }' 00:15:33.417 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:33.417 { 00:15:33.417 "nqn": "nqn.2016-06.io.spdk:cnode27831", 00:15:33.417 "max_cntlid": 65520, 00:15:33.417 "method": "nvmf_create_subsystem", 00:15:33.417 "req_id": 1 00:15:33.417 } 00:15:33.417 Got JSON-RPC error response 00:15:33.417 response: 00:15:33.417 { 00:15:33.417 "code": -32602, 00:15:33.417 "message": "Invalid cntlid range [1-65520]" 00:15:33.417 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:33.417 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20525 -i 6 -I 5 00:15:33.675 [2024-07-26 08:48:52.052303] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20525: invalid cntlid range [6-5] 00:15:33.675 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:33.675 { 00:15:33.675 "nqn": "nqn.2016-06.io.spdk:cnode20525", 00:15:33.675 "min_cntlid": 6, 00:15:33.675 "max_cntlid": 5, 00:15:33.675 "method": "nvmf_create_subsystem", 00:15:33.675 "req_id": 1 00:15:33.675 } 
00:15:33.675 Got JSON-RPC error response 00:15:33.675 response: 00:15:33.675 { 00:15:33.675 "code": -32602, 00:15:33.675 "message": "Invalid cntlid range [6-5]" 00:15:33.675 }' 00:15:33.675 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:33.675 { 00:15:33.675 "nqn": "nqn.2016-06.io.spdk:cnode20525", 00:15:33.675 "min_cntlid": 6, 00:15:33.675 "max_cntlid": 5, 00:15:33.675 "method": "nvmf_create_subsystem", 00:15:33.675 "req_id": 1 00:15:33.675 } 00:15:33.675 Got JSON-RPC error response 00:15:33.675 response: 00:15:33.675 { 00:15:33.675 "code": -32602, 00:15:33.675 "message": "Invalid cntlid range [6-5]" 00:15:33.675 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:33.675 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:33.935 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:33.935 { 00:15:33.935 "name": "foobar", 00:15:33.935 "method": "nvmf_delete_target", 00:15:33.935 "req_id": 1 00:15:33.935 } 00:15:33.935 Got JSON-RPC error response 00:15:33.935 response: 00:15:33.935 { 00:15:33.935 "code": -32602, 00:15:33.935 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:33.935 }' 00:15:33.935 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:33.935 { 00:15:33.935 "name": "foobar", 00:15:33.935 "method": "nvmf_delete_target", 00:15:33.935 "req_id": 1 00:15:33.935 } 00:15:33.935 Got JSON-RPC error response 00:15:33.935 response: 00:15:33.935 { 00:15:33.935 "code": -32602, 00:15:33.935 "message": "The specified target doesn't exist, cannot delete it." 
00:15:33.935 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:33.935 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:33.935 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:33.935 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.935 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:15:33.935 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.936 rmmod nvme_tcp 00:15:33.936 rmmod nvme_fabrics 00:15:33.936 rmmod nvme_keyring 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 944355 ']' 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 944355 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 944355 ']' 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 944355 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 944355 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 944355' 00:15:33.936 killing process with pid 944355 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 944355 00:15:33.936 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 944355 00:15:34.196 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.196 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.196 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.196 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.196 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.196 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.196 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.196 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.106 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.106 00:15:36.106 real 0m8.550s 00:15:36.106 user 0m19.967s 00:15:36.106 sys 0m2.378s 
00:15:36.106 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:36.106 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:36.106 ************************************ 00:15:36.106 END TEST nvmf_invalid 00:15:36.106 ************************************ 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:36.365 ************************************ 00:15:36.365 START TEST nvmf_connect_stress 00:15:36.365 ************************************ 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:36.365 * Looking for test storage... 
00:15:36.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.365 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.366 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:38.271 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.271 08:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:38.271 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.271 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.272 08:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:38.272 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:38.272 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.272 
08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.272 
08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:15:38.272 00:15:38.272 --- 10.0.0.2 ping statistics --- 00:15:38.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.272 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:38.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:15:38.272 00:15:38.272 --- 10.0.0.1 ping statistics --- 00:15:38.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.272 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=946871 00:15:38.272 08:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 946871 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 946871 ']' 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:38.272 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.531 [2024-07-26 08:48:56.774718] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:15:38.531 [2024-07-26 08:48:56.774801] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.531 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.531 [2024-07-26 08:48:56.821336] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:38.531 [2024-07-26 08:48:56.853480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.531 [2024-07-26 08:48:56.949170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.531 [2024-07-26 08:48:56.949236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.531 [2024-07-26 08:48:56.949262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.531 [2024-07-26 08:48:56.949276] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.531 [2024-07-26 08:48:56.949288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.531 [2024-07-26 08:48:56.949377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.531 [2024-07-26 08:48:56.949446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.531 [2024-07-26 08:48:56.949449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd 
nvmf_create_transport -t tcp -o -u 8192 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.792 [2024-07-26 08:48:57.099510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.792 [2024-07-26 08:48:57.124191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.792 NULL1 00:15:38.792 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=947008 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 
20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 
00:15:38.793 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:38.793 08:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.793 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.053 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.053 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:39.053 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.053 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.053 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.619 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.619 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:39.619 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.619 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.619 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.877 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.878 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:39.878 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.878 08:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.878 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.137 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.137 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:40.137 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.137 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.137 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.396 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.396 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:40.396 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.396 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.396 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.655 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.655 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:40.655 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.655 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.655 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.221 08:48:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.221 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:41.221 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.221 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.221 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.481 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.481 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:41.481 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.481 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.481 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.740 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.740 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:41.740 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.740 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.740 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.001 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.001 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:42.001 
08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.001 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.001 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.260 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.260 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:42.260 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.260 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.260 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.829 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.829 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:42.829 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.829 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.829 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.090 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.090 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:43.090 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.090 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.090 
08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.350 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.350 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:43.350 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.350 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.350 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.611 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.611 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:43.611 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.611 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.611 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.869 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.869 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:43.870 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.870 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.870 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.438 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.438 
08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:44.438 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.438 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.438 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.698 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.698 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:44.698 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.698 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.698 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.958 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.958 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:44.958 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.958 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.958 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.216 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.216 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:45.216 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.216 
08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.216 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.782 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.782 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:45.782 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.782 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.782 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.042 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.042 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:46.042 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.042 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.042 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.302 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.302 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:46.302 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.302 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.302 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.561 
08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.561 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:46.561 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.561 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.561 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.821 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.821 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:46.822 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.822 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.822 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.390 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.390 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:47.390 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.390 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.390 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.650 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.650 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 
00:15:47.650 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.650 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.650 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.910 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.910 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:47.910 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.910 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.910 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.213 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.213 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:48.213 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.213 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.213 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.471 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.471 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:48.471 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.472 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:48.472 08:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.730 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.730 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:48.730 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.730 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.730 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.990 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947008 00:15:49.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (947008) - No such process 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 947008 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:49.248 08:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:49.248 rmmod nvme_tcp 00:15:49.248 rmmod nvme_fabrics 00:15:49.248 rmmod nvme_keyring 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 946871 ']' 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 946871 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 946871 ']' 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 946871 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 946871 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:49.248 08:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 946871' 00:15:49.248 killing process with pid 946871 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 946871 00:15:49.248 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 946871 00:15:49.507 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:49.507 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:49.507 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:49.507 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.507 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:49.507 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.507 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.507 08:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.412 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:51.412 00:15:51.412 real 0m15.240s 00:15:51.412 user 0m38.292s 00:15:51.412 sys 0m5.883s 00:15:51.412 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.412 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.412 ************************************ 00:15:51.412 END TEST nvmf_connect_stress 00:15:51.412 ************************************ 
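The trace above repeats one pattern until the stress process disappears: `kill -0 947008` to check that the worker still exists (signal 0 probes the PID without delivering a signal), an `rpc_cmd` round-trip, then another check, ending with the `line 34: kill: (947008) - No such process` message and a final `wait`. A minimal, self-contained sketch of that liveness-polling idiom follows; the worker, variable names, and sleep intervals are placeholders for illustration, not the actual connect_stress.sh code:

```shell
#!/usr/bin/env bash
# Liveness-polling sketch: spawn a background worker, poll it with
# kill -0 (signal 0 = existence check, no signal delivered), then
# reap it with wait once the poll fails.
set -u

worker() { sleep 1; }   # stand-in for the stress workload
worker &
pid=$!

polls=0
# kill -0 succeeds while the process exists and we may signal it.
while kill -0 "$pid" 2>/dev/null; do
    polls=$((polls + 1))
    sleep 0.2             # in the real test, an RPC call sits here
done

# Reap the child so it does not linger as a zombie; suppress the
# error if the shell already collected it.
wait "$pid" 2>/dev/null
echo "worker $pid exited after $polls polls"
```

The `2>/dev/null` on the polling check matters: once the PID is gone, `kill -0` prints a "No such process" diagnostic (visible verbatim in the log above) that would otherwise clutter the output on every exit.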
00:15:51.412 08:49:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:51.412 08:49:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:51.412 08:49:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.412 08:49:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.671 ************************************ 00:15:51.671 START TEST nvmf_fused_ordering 00:15:51.671 ************************************ 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:51.671 * Looking for test storage... 00:15:51.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.671 08:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:51.671 08:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:51.671 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.573 08:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:53.573 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:53.573 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:53.573 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:53.573 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.573 08:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:53.573 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:53.573 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:53.573 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:53.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:15:53.832 00:15:53.832 --- 10.0.0.2 ping statistics --- 00:15:53.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.832 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:53.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:53.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:15:53.832 00:15:53.832 --- 10.0.0.1 ping statistics --- 00:15:53.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.832 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=950147 00:15:53.832 08:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 950147 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 950147 ']' 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.832 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.832 [2024-07-26 08:49:12.145901] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:15:53.832 [2024-07-26 08:49:12.145989] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.832 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.832 [2024-07-26 08:49:12.193869] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:53.832 [2024-07-26 08:49:12.224350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.092 [2024-07-26 08:49:12.320810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.092 [2024-07-26 08:49:12.320868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.092 [2024-07-26 08:49:12.320885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.092 [2024-07-26 08:49:12.320899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.092 [2024-07-26 08:49:12.320911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.092 [2024-07-26 08:49:12.320939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.092 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.092 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:54.092 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:54.092 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:54.092 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- common/autotest_common.sh@10 -- # set +x 00:15:54.093 [2024-07-26 08:49:12.472518] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.093 [2024-07-26 08:49:12.488735] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.093 NULL1 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.093 08:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:54.093 [2024-07-26 08:49:12.534548] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:15:54.093 [2024-07-26 08:49:12.534590] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950179 ] 00:15:54.352 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.352 [2024-07-26 08:49:12.568207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:15:54.612 Attached to nqn.2016-06.io.spdk:cnode1 00:15:54.612 Namespace ID: 1 size: 1GB 00:15:54.612 fused_ordering(0) 00:15:54.612 fused_ordering(1) 00:15:54.612 fused_ordering(2) 00:15:54.612 fused_ordering(3) 00:15:54.612 fused_ordering(4) 00:15:54.612 fused_ordering(5) 00:15:54.612 fused_ordering(6) 00:15:54.612 fused_ordering(7) 00:15:54.612 fused_ordering(8) 00:15:54.612 fused_ordering(9) 00:15:54.612 fused_ordering(10) 00:15:54.612 fused_ordering(11) 00:15:54.612 fused_ordering(12) 00:15:54.612 fused_ordering(13) 00:15:54.612 fused_ordering(14) 00:15:54.612 fused_ordering(15) 00:15:54.612 fused_ordering(16) 00:15:54.612 fused_ordering(17) 00:15:54.612 fused_ordering(18) 00:15:54.612 fused_ordering(19) 00:15:54.612 fused_ordering(20) 00:15:54.612 fused_ordering(21) 00:15:54.612 fused_ordering(22) 00:15:54.612 fused_ordering(23) 00:15:54.612 fused_ordering(24) 00:15:54.612 fused_ordering(25) 00:15:54.612 fused_ordering(26) 00:15:54.612 fused_ordering(27) 00:15:54.612 fused_ordering(28) 00:15:54.612 fused_ordering(29) 00:15:54.612 fused_ordering(30) 00:15:54.612 fused_ordering(31) 00:15:54.612 fused_ordering(32) 00:15:54.612 fused_ordering(33) 00:15:54.612 fused_ordering(34) 00:15:54.612 fused_ordering(35) 00:15:54.612 fused_ordering(36) 00:15:54.612 fused_ordering(37) 00:15:54.612 fused_ordering(38) 00:15:54.612 fused_ordering(39) 00:15:54.612 fused_ordering(40) 00:15:54.612 fused_ordering(41) 00:15:54.612 fused_ordering(42) 00:15:54.612 fused_ordering(43) 00:15:54.612 fused_ordering(44) 00:15:54.612 fused_ordering(45) 00:15:54.612 fused_ordering(46) 00:15:54.612 fused_ordering(47) 00:15:54.612 fused_ordering(48) 00:15:54.612 fused_ordering(49) 00:15:54.612 fused_ordering(50) 00:15:54.612 fused_ordering(51) 00:15:54.612 fused_ordering(52) 00:15:54.612 fused_ordering(53) 00:15:54.612 fused_ordering(54) 00:15:54.612 fused_ordering(55) 00:15:54.612 fused_ordering(56) 00:15:54.612 fused_ordering(57) 00:15:54.612 fused_ordering(58) 
00:15:54.612 fused_ordering(59) 00:15:54.612 fused_ordering(60) 00:15:54.613 fused_ordering(61) 00:15:54.613 fused_ordering(62) 00:15:54.613 fused_ordering(63) 00:15:54.613 fused_ordering(64) 00:15:54.613 fused_ordering(65) 00:15:54.613 fused_ordering(66) 00:15:54.613 fused_ordering(67) 00:15:54.613 fused_ordering(68) 00:15:54.613 fused_ordering(69) 00:15:54.613 fused_ordering(70) 00:15:54.613 fused_ordering(71) 00:15:54.613 fused_ordering(72) 00:15:54.613 fused_ordering(73) 00:15:54.613 fused_ordering(74) 00:15:54.613 fused_ordering(75) 00:15:54.613 fused_ordering(76) 00:15:54.613 fused_ordering(77) 00:15:54.613 fused_ordering(78) 00:15:54.613 fused_ordering(79) 00:15:54.613 fused_ordering(80) 00:15:54.613 fused_ordering(81) 00:15:54.613 fused_ordering(82) 00:15:54.613 fused_ordering(83) 00:15:54.613 fused_ordering(84) 00:15:54.613 fused_ordering(85) 00:15:54.613 fused_ordering(86) 00:15:54.613 fused_ordering(87) 00:15:54.613 fused_ordering(88) 00:15:54.613 fused_ordering(89) 00:15:54.613 fused_ordering(90) 00:15:54.613 fused_ordering(91) 00:15:54.613 fused_ordering(92) 00:15:54.613 fused_ordering(93) 00:15:54.613 fused_ordering(94) 00:15:54.613 fused_ordering(95) 00:15:54.613 fused_ordering(96) 00:15:54.613 fused_ordering(97) 00:15:54.613 fused_ordering(98) 00:15:54.613 fused_ordering(99) 00:15:54.613 fused_ordering(100) 00:15:54.613 fused_ordering(101) 00:15:54.613 fused_ordering(102) 00:15:54.613 fused_ordering(103) 00:15:54.613 fused_ordering(104) 00:15:54.613 fused_ordering(105) 00:15:54.613 fused_ordering(106) 00:15:54.613 fused_ordering(107) 00:15:54.613 fused_ordering(108) 00:15:54.613 fused_ordering(109) 00:15:54.613 fused_ordering(110) 00:15:54.613 fused_ordering(111) 00:15:54.613 fused_ordering(112) 00:15:54.613 fused_ordering(113) 00:15:54.613 fused_ordering(114) 00:15:54.613 fused_ordering(115) 00:15:54.613 fused_ordering(116) 00:15:54.613 fused_ordering(117) 00:15:54.613 fused_ordering(118) 00:15:54.613 fused_ordering(119) 00:15:54.613 
fused_ordering(120) 00:15:54.613 fused_ordering(121) 00:15:54.613 fused_ordering(122) 00:15:54.613 fused_ordering(123) 00:15:54.613 fused_ordering(124) 00:15:54.613 fused_ordering(125) 00:15:54.613 fused_ordering(126) 00:15:54.613 fused_ordering(127) 00:15:54.613 fused_ordering(128) 00:15:54.613 fused_ordering(129) 00:15:54.613 fused_ordering(130) 00:15:54.613 fused_ordering(131) 00:15:54.613 fused_ordering(132) 00:15:54.613 fused_ordering(133) 00:15:54.613 fused_ordering(134) 00:15:54.613 fused_ordering(135) 00:15:54.613 fused_ordering(136) 00:15:54.613 fused_ordering(137) 00:15:54.613 fused_ordering(138) 00:15:54.613 fused_ordering(139) 00:15:54.613 fused_ordering(140) 00:15:54.613 fused_ordering(141) 00:15:54.613 fused_ordering(142) 00:15:54.613 fused_ordering(143) 00:15:54.613 fused_ordering(144) 00:15:54.613 fused_ordering(145) 00:15:54.613 fused_ordering(146) 00:15:54.613 fused_ordering(147) 00:15:54.613 fused_ordering(148) 00:15:54.613 fused_ordering(149) 00:15:54.613 fused_ordering(150) 00:15:54.613 fused_ordering(151) 00:15:54.613 fused_ordering(152) 00:15:54.613 fused_ordering(153) 00:15:54.613 fused_ordering(154) 00:15:54.613 fused_ordering(155) 00:15:54.613 fused_ordering(156) 00:15:54.613 fused_ordering(157) 00:15:54.613 fused_ordering(158) 00:15:54.613 fused_ordering(159) 00:15:54.613 fused_ordering(160) 00:15:54.613 fused_ordering(161) 00:15:54.613 fused_ordering(162) 00:15:54.613 fused_ordering(163) 00:15:54.613 fused_ordering(164) 00:15:54.613 fused_ordering(165) 00:15:54.613 fused_ordering(166) 00:15:54.613 fused_ordering(167) 00:15:54.613 fused_ordering(168) 00:15:54.613 fused_ordering(169) 00:15:54.613 fused_ordering(170) 00:15:54.613 fused_ordering(171) 00:15:54.613 fused_ordering(172) 00:15:54.613 fused_ordering(173) 00:15:54.613 fused_ordering(174) 00:15:54.613 fused_ordering(175) 00:15:54.613 fused_ordering(176) 00:15:54.613 fused_ordering(177) 00:15:54.613 fused_ordering(178) 00:15:54.613 fused_ordering(179) 00:15:54.613 fused_ordering(180) 
00:15:54.613 fused_ordering(181) 00:15:54.613 fused_ordering(182) 00:15:54.613 fused_ordering(183) 00:15:54.613 fused_ordering(184) 00:15:54.613 fused_ordering(185) 00:15:54.613 fused_ordering(186) 00:15:54.613 fused_ordering(187) 00:15:54.613 fused_ordering(188) 00:15:54.613 fused_ordering(189) 00:15:54.613 fused_ordering(190) 00:15:54.613 fused_ordering(191) 00:15:54.613 fused_ordering(192) 00:15:54.613 fused_ordering(193) 00:15:54.613 fused_ordering(194) 00:15:54.613 fused_ordering(195) 00:15:54.613 fused_ordering(196) 00:15:54.613 fused_ordering(197) 00:15:54.613 fused_ordering(198) 00:15:54.613 fused_ordering(199) 00:15:54.613 fused_ordering(200) 00:15:54.613 fused_ordering(201) 00:15:54.613 fused_ordering(202) 00:15:54.613 fused_ordering(203) 00:15:54.613 fused_ordering(204) 00:15:54.613 fused_ordering(205) 00:15:55.181 fused_ordering(206) 00:15:55.181 fused_ordering(207) 00:15:55.181 fused_ordering(208) 00:15:55.181 fused_ordering(209) 00:15:55.181 fused_ordering(210) 00:15:55.181 fused_ordering(211) 00:15:55.181 fused_ordering(212) 00:15:55.181 fused_ordering(213) 00:15:55.181 fused_ordering(214) 00:15:55.182 fused_ordering(215) 00:15:55.182 fused_ordering(216) 00:15:55.182 fused_ordering(217) 00:15:55.182 fused_ordering(218) 00:15:55.182 fused_ordering(219) 00:15:55.182 fused_ordering(220) 00:15:55.182 fused_ordering(221) 00:15:55.182 fused_ordering(222) 00:15:55.182 fused_ordering(223) 00:15:55.182 fused_ordering(224) 00:15:55.182 fused_ordering(225) 00:15:55.182 fused_ordering(226) 00:15:55.182 fused_ordering(227) 00:15:55.182 fused_ordering(228) 00:15:55.182 fused_ordering(229) 00:15:55.182 fused_ordering(230) 00:15:55.182 fused_ordering(231) 00:15:55.182 fused_ordering(232) 00:15:55.182 fused_ordering(233) 00:15:55.182 fused_ordering(234) 00:15:55.182 fused_ordering(235) 00:15:55.182 fused_ordering(236) 00:15:55.182 fused_ordering(237) 00:15:55.182 fused_ordering(238) 00:15:55.182 fused_ordering(239) 00:15:55.182 fused_ordering(240) 00:15:55.182 
fused_ordering(241) 00:15:55.182 [... fused_ordering(242) through fused_ordering(1022) elided: identical per-iteration log lines, timestamps 00:15:55.182 to 00:15:56.952 ...] fused_ordering(1023) 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # 
trap - SIGINT SIGTERM EXIT 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.952 rmmod nvme_tcp 00:15:56.952 rmmod nvme_fabrics 00:15:56.952 rmmod nvme_keyring 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 950147 ']' 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 950147 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 950147 ']' 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 950147 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.952 08:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 950147 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 950147' 00:15:56.952 killing process with pid 950147 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 950147 00:15:56.952 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 950147 00:15:57.212 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:57.212 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:57.212 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:57.212 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.212 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:57.212 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.212 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.212 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.746 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:59.746 00:15:59.746 real 0m7.694s 00:15:59.746 user 0m5.252s 
00:15:59.746 sys 0m3.439s 00:15:59.746 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:59.746 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:59.746 ************************************ 00:15:59.746 END TEST nvmf_fused_ordering 00:15:59.746 ************************************ 00:15:59.746 08:49:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:59.746 08:49:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:59.746 08:49:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:59.746 08:49:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:59.747 ************************************ 00:15:59.747 START TEST nvmf_ns_masking 00:15:59.747 ************************************ 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:59.747 * Looking for test storage... 
00:15:59.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.747 
08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9dc2d18c-7962-4130-be8d-85bdb6dcc2cd 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=41cfc311-06ea-4caa-9c4f-85a2cd84a185 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=cb48ba95-d330-4578-9619-55bb67fbd770 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.747 08:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:59.747 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:01.652 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.652 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:01.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:01.653 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:01.653 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.653 08:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.653 08:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:16:01.653 00:16:01.653 --- 10.0.0.2 ping statistics --- 00:16:01.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.653 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:16:01.653 00:16:01.653 --- 10.0.0.1 ping statistics --- 00:16:01.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.653 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=952500 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 952500 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 952500 ']' 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.653 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:01.653 [2024-07-26 08:49:19.810457] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:16:01.653 [2024-07-26 08:49:19.810541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.653 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.653 [2024-07-26 08:49:19.847817] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:01.653 [2024-07-26 08:49:19.873725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.653 [2024-07-26 08:49:19.956267] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.653 [2024-07-26 08:49:19.956319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.653 [2024-07-26 08:49:19.956332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.653 [2024-07-26 08:49:19.956343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.653 [2024-07-26 08:49:19.956353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:01.653 [2024-07-26 08:49:19.956377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.653 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.653 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:01.653 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.653 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:01.653 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:01.653 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.653 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:01.912 [2024-07-26 08:49:20.367693] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.170 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:02.170 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:02.170 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:02.428 Malloc1 00:16:02.428 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:02.686 Malloc2 00:16:02.686 08:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:02.944 08:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:03.202 08:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.460 [2024-07-26 08:49:21.771178] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.460 08:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:03.460 08:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cb48ba95-d330-4578-9619-55bb67fbd770 -a 10.0.0.2 -s 4420 -i 4 00:16:03.460 08:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:03.460 08:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:03.460 08:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.460 08:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:03.460 08:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.995 08:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:05.995 [ 0]:0x1 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd49d4196b804f8badb961ae1b56642d 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd49d4196b804f8badb961ae1b56642d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:05.996 [ 0]:0x1 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd49d4196b804f8badb961ae1b56642d 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd49d4196b804f8badb961ae1b56642d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:05.996 [ 1]:0x2 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2043c9d8011f4aafb767166b35a4e27c 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2043c9d8011f4aafb767166b35a4e27c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:05.996 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:06.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.253 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.511 08:49:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:06.770 08:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:06.770 08:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cb48ba95-d330-4578-9619-55bb67fbd770 -a 10.0.0.2 -s 4420 -i 4 00:16:07.031 08:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:07.031 08:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:07.031 08:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.031 08:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:07.031 08:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:07.031 08:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:08.965 08:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:08.965 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:09.223 [ 0]:0x2 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2043c9d8011f4aafb767166b35a4e27c 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2043c9d8011f4aafb767166b35a4e27c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.223 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:09.482 [ 0]:0x1 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd49d4196b804f8badb961ae1b56642d 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd49d4196b804f8badb961ae1b56642d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:16:09.482 [ 1]:0x2 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2043c9d8011f4aafb767166b35a4e27c 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2043c9d8011f4aafb767166b35a4e27c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.482 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:10.047 [ 0]:0x2 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2043c9d8011f4aafb767166b35a4e27c 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 2043c9d8011f4aafb767166b35a4e27c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:10.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.047 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:10.304 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:10.304 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cb48ba95-d330-4578-9619-55bb67fbd770 -a 10.0.0.2 -s 4420 -i 4 00:16:10.562 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:10.562 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:10.562 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:10.562 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:10.562 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:10.562 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:12.467 [ 0]:0x1 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd49d4196b804f8badb961ae1b56642d 00:16:12.467 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd49d4196b804f8badb961ae1b56642d != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.725 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:12.725 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.725 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:12.725 [ 1]:0x2 00:16:12.725 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:12.725 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.725 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2043c9d8011f4aafb767166b35a4e27c 00:16:12.725 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2043c9d8011f4aafb767166b35a4e27c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.725 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:12.983 [ 0]:0x2 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2043c9d8011f4aafb767166b35a4e27c 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2043c9d8011f4aafb767166b35a4e27c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:12.983 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:13.240 [2024-07-26 08:49:31.545047] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:13.240 request: 00:16:13.240 { 00:16:13.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.240 "nsid": 2, 00:16:13.240 "host": "nqn.2016-06.io.spdk:host1", 00:16:13.240 "method": "nvmf_ns_remove_host", 00:16:13.240 "req_id": 1 00:16:13.240 } 00:16:13.240 Got JSON-RPC error response 00:16:13.240 response: 00:16:13.240 { 00:16:13.240 "code": -32602, 00:16:13.240 "message": "Invalid parameters" 00:16:13.240 } 00:16:13.240 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:13.240 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:13.240 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:13.240 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:13.241 08:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:13.241 [ 0]:0x2 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2043c9d8011f4aafb767166b35a4e27c 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2043c9d8011f4aafb767166b35a4e27c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:13.241 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.500 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=953985 00:16:13.500 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:13.500 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.500 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 953985 /var/tmp/host.sock 00:16:13.500 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 953985 ']' 00:16:13.500 08:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:13.500 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:13.500 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:13.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:13.500 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:13.500 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:13.500 [2024-07-26 08:49:31.759657] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:13.500 [2024-07-26 08:49:31.759739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953985 ] 00:16:13.500 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.500 [2024-07-26 08:49:31.791405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:13.500 [2024-07-26 08:49:31.823214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.500 [2024-07-26 08:49:31.917223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.758 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.758 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:13.758 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.016 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:14.581 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9dc2d18c-7962-4130-be8d-85bdb6dcc2cd 00:16:14.582 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:14.582 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9DC2D18C79624130BE8D85BDB6DCC2CD -i 00:16:14.839 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 41cfc311-06ea-4caa-9c4f-85a2cd84a185 00:16:14.839 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:14.839 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 41CFC31106EA4CAA9C4F85A2CD84A185 -i 00:16:15.098 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:15.356 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:15.613 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:15.613 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:15.870 nvme0n1 00:16:15.870 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:15.870 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:16.128 nvme1n2 00:16:16.128 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:16.128 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:16.128 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:16.128 08:49:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:16.128 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:16.385 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:16.385 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:16.385 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:16.386 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:16.643 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9dc2d18c-7962-4130-be8d-85bdb6dcc2cd == \9\d\c\2\d\1\8\c\-\7\9\6\2\-\4\1\3\0\-\b\e\8\d\-\8\5\b\d\b\6\d\c\c\2\c\d ]] 00:16:16.643 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:16.643 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:16.643 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 41cfc311-06ea-4caa-9c4f-85a2cd84a185 == \4\1\c\f\c\3\1\1\-\0\6\e\a\-\4\c\a\a\-\9\c\4\f\-\8\5\a\2\c\d\8\4\a\1\8\5 ]] 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 953985 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 953985 ']' 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@954 -- # kill -0 953985 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 953985 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 953985' 00:16:16.901 killing process with pid 953985 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 953985 00:16:16.901 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 953985 00:16:17.468 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.468 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:17.468 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:17.468 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:17.468 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:17.468 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:17.468 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:17.468 08:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:17.468 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:17.468 rmmod nvme_tcp 00:16:17.727 rmmod nvme_fabrics 00:16:17.727 rmmod nvme_keyring 00:16:17.727 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:17.727 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:17.727 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:17.727 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 952500 ']' 00:16:17.727 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 952500 00:16:17.727 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 952500 ']' 00:16:17.727 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 952500 00:16:17.727 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:17.727 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:17.727 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 952500 00:16:17.727 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:17.727 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:17.727 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 952500' 00:16:17.727 killing process with pid 952500 00:16:17.727 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 952500 
00:16:17.727 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 952500 00:16:17.986 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:17.986 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:17.986 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:17.986 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:17.986 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:17.986 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.986 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.986 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.887 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:20.146 00:16:20.146 real 0m20.712s 00:16:20.146 user 0m27.084s 00:16:20.146 sys 0m4.069s 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:20.146 ************************************ 00:16:20.146 END TEST nvmf_ns_masking 00:16:20.146 ************************************ 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:20.146 08:49:38 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.146 ************************************ 00:16:20.146 START TEST nvmf_nvme_cli 00:16:20.146 ************************************ 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:20.146 * Looking for test storage... 00:16:20.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.146 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.147 08:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:20.147 
08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:20.147 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.051 
08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:22.051 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.051 08:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:22.051 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:22.051 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:22.052 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:22.052 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 
-- # [[ yes == yes ]] 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.052 08:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:22.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:16:22.052 00:16:22.052 --- 10.0.0.2 ping statistics --- 00:16:22.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.052 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:16:22.052 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:22.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:16:22.310 00:16:22.310 --- 10.0.0.1 ping statistics --- 00:16:22.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.310 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=956473 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 956473 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 956473 ']' 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.310 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.310 [2024-07-26 08:49:40.592483] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:22.310 [2024-07-26 08:49:40.592572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.310 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.310 [2024-07-26 08:49:40.636763] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:22.310 [2024-07-26 08:49:40.667572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.310 [2024-07-26 08:49:40.763125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:22.310 [2024-07-26 08:49:40.763191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.310 [2024-07-26 08:49:40.763208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.310 [2024-07-26 08:49:40.763222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.310 [2024-07-26 08:49:40.763234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.310 [2024-07-26 08:49:40.763618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.310 [2024-07-26 08:49:40.763672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.310 [2024-07-26 08:49:40.763725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.310 [2024-07-26 08:49:40.763728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.573 08:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.573 [2024-07-26 08:49:40.927765] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.573 Malloc0 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.573 Malloc1 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.573 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.573 [2024-07-26 08:49:41.013614] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:22.573 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:22.832 00:16:22.832 Discovery Log Number of Records 2, Generation counter 2 00:16:22.832 =====Discovery Log Entry 0====== 00:16:22.832 trtype: tcp 00:16:22.832 adrfam: ipv4 00:16:22.832 subtype: current discovery subsystem 00:16:22.832 treq: not required 00:16:22.832 portid: 0 00:16:22.832 trsvcid: 4420 00:16:22.832 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:22.832 traddr: 10.0.0.2 00:16:22.832 eflags: explicit discovery connections, duplicate discovery information 00:16:22.832 sectype: none 00:16:22.832 =====Discovery Log Entry 1====== 00:16:22.832 trtype: tcp 00:16:22.832 adrfam: ipv4 00:16:22.832 subtype: nvme subsystem 00:16:22.832 treq: not required 00:16:22.832 portid: 0 00:16:22.832 trsvcid: 4420 00:16:22.832 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:22.832 traddr: 10.0.0.2 00:16:22.832 eflags: none 00:16:22.832 sectype: none 00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 
00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:22.832 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:23.400 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:23.400 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:23.400 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:23.400 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:23.400 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:23.400 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 
00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:25.934 /dev/nvme0n1 ]] 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local 
dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:25.934 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 
-- # lsblk -o NAME,SERIAL 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.935 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:16:25.935 rmmod nvme_tcp 00:16:25.935 rmmod nvme_fabrics 00:16:25.935 rmmod nvme_keyring 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 956473 ']' 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 956473 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 956473 ']' 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 956473 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 956473 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 956473' 00:16:25.935 killing process with pid 956473 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 956473 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 956473 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.935 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.474 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:28.474 00:16:28.474 real 0m8.029s 00:16:28.474 user 0m14.822s 00:16:28.474 sys 0m2.124s 00:16:28.474 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.474 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:28.474 ************************************ 00:16:28.474 END TEST nvmf_nvme_cli 00:16:28.474 ************************************ 00:16:28.474 08:49:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:28.474 08:49:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:28.474 08:49:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:28.474 08:49:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.474 08:49:46 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.474 ************************************ 00:16:28.474 START TEST nvmf_vfio_user 00:16:28.474 ************************************ 00:16:28.474 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:28.474 * Looking for test storage... 00:16:28.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.475 08:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:28.475 08:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=957276 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 957276' 00:16:28.475 Process pid: 957276 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 957276 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 957276 ']' 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:28.475 [2024-07-26 08:49:46.588027] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:28.475 [2024-07-26 08:49:46.588132] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.475 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.475 [2024-07-26 08:49:46.629925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:28.475 [2024-07-26 08:49:46.661036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:28.475 [2024-07-26 08:49:46.755878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.475 [2024-07-26 08:49:46.755947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.475 [2024-07-26 08:49:46.755968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.475 [2024-07-26 08:49:46.755981] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:28.475 [2024-07-26 08:49:46.755993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.475 [2024-07-26 08:49:46.756074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.475 [2024-07-26 08:49:46.756117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.475 [2024-07-26 08:49:46.756171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:28.475 [2024-07-26 08:49:46.756175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:28.475 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:29.862 08:49:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:29.862 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:29.862 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:29.862 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:29.862 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:29.862 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:30.119 Malloc1 00:16:30.119 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:30.377 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:30.635 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:30.893 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:30.893 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:30.893 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:31.151 Malloc2 00:16:31.151 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:31.409 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:31.668 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:31.927 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:31.927 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:31.927 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:31.927 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:31.927 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:31.927 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:31.927 [2024-07-26 08:49:50.210117] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:31.927 [2024-07-26 08:49:50.210163] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957699 ] 00:16:31.927 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.927 [2024-07-26 08:49:50.226824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:31.927 [2024-07-26 08:49:50.244425] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:31.927 [2024-07-26 08:49:50.254221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:31.927 [2024-07-26 08:49:50.254255] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa91cb37000 00:16:31.927 [2024-07-26 08:49:50.259084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:31.927 [2024-07-26 08:49:50.259222] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:31.927 [2024-07-26 08:49:50.260226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:31.927 [2024-07-26 08:49:50.261231] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:31.927 [2024-07-26 08:49:50.262236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:31.927 [2024-07-26 08:49:50.263236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:31.927 [2024-07-26 08:49:50.264239] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:31.927 [2024-07-26 08:49:50.265249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:31.927 [2024-07-26 08:49:50.266256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 
00:16:31.927 [2024-07-26 08:49:50.266277] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa91b8f9000 00:16:31.927 [2024-07-26 08:49:50.267409] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:31.927 [2024-07-26 08:49:50.278985] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:31.927 [2024-07-26 08:49:50.279023] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:31.927 [2024-07-26 08:49:50.284366] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:31.927 [2024-07-26 08:49:50.284438] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:31.928 [2024-07-26 08:49:50.284541] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:31.928 [2024-07-26 08:49:50.284572] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:31.928 [2024-07-26 08:49:50.284582] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:31.928 [2024-07-26 08:49:50.285372] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:31.928 [2024-07-26 08:49:50.285396] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:31.928 [2024-07-26 08:49:50.285409] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:31.928 [2024-07-26 08:49:50.286362] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:31.928 [2024-07-26 08:49:50.286395] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:31.928 [2024-07-26 08:49:50.286408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:31.928 [2024-07-26 08:49:50.287368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:31.928 [2024-07-26 08:49:50.287402] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:31.928 [2024-07-26 08:49:50.288379] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:31.928 [2024-07-26 08:49:50.288413] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:31.928 [2024-07-26 08:49:50.288422] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:31.928 [2024-07-26 08:49:50.288433] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:31.928 [2024-07-26 08:49:50.288543] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:31.928 [2024-07-26 08:49:50.288552] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:31.928 [2024-07-26 08:49:50.288561] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:31.928 [2024-07-26 08:49:50.289387] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:31.928 [2024-07-26 08:49:50.290405] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:31.928 [2024-07-26 08:49:50.291398] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:31.928 [2024-07-26 08:49:50.292407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:31.928 [2024-07-26 08:49:50.292513] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:31.928 [2024-07-26 08:49:50.293403] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:31.928 [2024-07-26 08:49:50.293421] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:31.928 [2024-07-26 08:49:50.293434] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.293459] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:31.928 [2024-07-26 08:49:50.293473] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.293503] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:31.928 [2024-07-26 08:49:50.293512] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:31.928 [2024-07-26 08:49:50.293519] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:31.928 [2024-07-26 08:49:50.293541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:31.928 [2024-07-26 08:49:50.293604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:31.928 [2024-07-26 08:49:50.293623] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:31.928 [2024-07-26 08:49:50.293631] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:31.928 [2024-07-26 08:49:50.293639] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:31.928 [2024-07-26 08:49:50.293647] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:31.928 [2024-07-26 08:49:50.293655] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:31.928 [2024-07-26 08:49:50.293663] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:31.928 [2024-07-26 08:49:50.293671] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting 
state to configure AER (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.293684] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.293703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:31.928 [2024-07-26 08:49:50.293725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:31.928 [2024-07-26 08:49:50.293747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.928 [2024-07-26 08:49:50.293760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.928 [2024-07-26 08:49:50.293772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.928 [2024-07-26 08:49:50.293784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.928 [2024-07-26 08:49:50.293792] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.293808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.293822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:31.928 [2024-07-26 08:49:50.293838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 
p:1 m:0 dnr:0 00:16:31.928 [2024-07-26 08:49:50.293851] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:31.928 [2024-07-26 08:49:50.293860] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.293877] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.293889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.293904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:31.928 [2024-07-26 08:49:50.293916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:31.928 [2024-07-26 08:49:50.293982] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.293998] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.294012] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:31.928 [2024-07-26 08:49:50.294020] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:31.928 [2024-07-26 08:49:50.294026] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:31.928 [2024-07-26 08:49:50.294036] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:31.928 [2024-07-26 08:49:50.294069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:31.928 [2024-07-26 08:49:50.294090] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:31.928 [2024-07-26 08:49:50.294122] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.294137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.294149] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:31.928 [2024-07-26 08:49:50.294157] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:31.928 [2024-07-26 08:49:50.294163] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:31.928 [2024-07-26 08:49:50.294173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:31.928 [2024-07-26 08:49:50.294199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:31.928 [2024-07-26 08:49:50.294222] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:31.928 [2024-07-26 08:49:50.294236] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:16:31.928 [2024-07-26 08:49:50.294248] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:31.928 [2024-07-26 08:49:50.294257] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:31.928 [2024-07-26 08:49:50.294266] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:31.928 [2024-07-26 08:49:50.294276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:31.928 [2024-07-26 08:49:50.294288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:31.929 [2024-07-26 08:49:50.294302] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:31.929 [2024-07-26 08:49:50.294315] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:31.929 [2024-07-26 08:49:50.294330] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:31.929 [2024-07-26 08:49:50.294359] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:31.929 [2024-07-26 08:49:50.294370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:31.929 [2024-07-26 08:49:50.294379] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:31.929 [2024-07-26 08:49:50.294388] 
nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:31.929 [2024-07-26 08:49:50.294397] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:31.929 [2024-07-26 08:49:50.294406] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:31.929 [2024-07-26 08:49:50.294434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:31.929 [2024-07-26 08:49:50.294452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:31.929 [2024-07-26 08:49:50.294471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:31.929 [2024-07-26 08:49:50.294483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:31.929 [2024-07-26 08:49:50.294499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:31.929 [2024-07-26 08:49:50.294510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:31.929 [2024-07-26 08:49:50.294526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:31.929 [2024-07-26 08:49:50.294537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:31.929 [2024-07-26 08:49:50.294559] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:31.929 
[2024-07-26 08:49:50.294569] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:31.929 [2024-07-26 08:49:50.294575] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:31.929 [2024-07-26 08:49:50.294581] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:31.929 [2024-07-26 08:49:50.294587] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:31.929 [2024-07-26 08:49:50.294597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:31.929 [2024-07-26 08:49:50.294609] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:31.929 [2024-07-26 08:49:50.294620] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:31.929 [2024-07-26 08:49:50.294626] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:31.929 [2024-07-26 08:49:50.294636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:31.929 [2024-07-26 08:49:50.294647] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:31.929 [2024-07-26 08:49:50.294655] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:31.929 [2024-07-26 08:49:50.294661] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:31.929 [2024-07-26 08:49:50.294670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:31.929 [2024-07-26 08:49:50.294682] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:31.929 [2024-07-26 08:49:50.294690] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:31.929 [2024-07-26 08:49:50.294697] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:31.929 [2024-07-26 08:49:50.294706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:31.929 [2024-07-26 08:49:50.294718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:31.929 [2024-07-26 08:49:50.294738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:31.929 [2024-07-26 08:49:50.294755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:31.929 [2024-07-26 08:49:50.294767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:31.929 ===================================================== 00:16:31.929 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:31.929 ===================================================== 00:16:31.929 Controller Capabilities/Features 00:16:31.929 ================================ 00:16:31.929 Vendor ID: 4e58 00:16:31.929 Subsystem Vendor ID: 4e58 00:16:31.929 Serial Number: SPDK1 00:16:31.929 Model Number: SPDK bdev Controller 00:16:31.929 Firmware Version: 24.09 00:16:31.929 Recommended Arb Burst: 6 00:16:31.929 IEEE OUI Identifier: 8d 6b 50 00:16:31.929 Multi-path I/O 00:16:31.929 May have multiple subsystem ports: Yes 00:16:31.929 May have multiple controllers: Yes 00:16:31.929 Associated with SR-IOV VF: No 
00:16:31.929 Max Data Transfer Size: 131072 00:16:31.929 Max Number of Namespaces: 32 00:16:31.929 Max Number of I/O Queues: 127 00:16:31.929 NVMe Specification Version (VS): 1.3 00:16:31.929 NVMe Specification Version (Identify): 1.3 00:16:31.929 Maximum Queue Entries: 256 00:16:31.929 Contiguous Queues Required: Yes 00:16:31.929 Arbitration Mechanisms Supported 00:16:31.929 Weighted Round Robin: Not Supported 00:16:31.929 Vendor Specific: Not Supported 00:16:31.929 Reset Timeout: 15000 ms 00:16:31.929 Doorbell Stride: 4 bytes 00:16:31.929 NVM Subsystem Reset: Not Supported 00:16:31.929 Command Sets Supported 00:16:31.929 NVM Command Set: Supported 00:16:31.929 Boot Partition: Not Supported 00:16:31.929 Memory Page Size Minimum: 4096 bytes 00:16:31.929 Memory Page Size Maximum: 4096 bytes 00:16:31.929 Persistent Memory Region: Not Supported 00:16:31.929 Optional Asynchronous Events Supported 00:16:31.929 Namespace Attribute Notices: Supported 00:16:31.929 Firmware Activation Notices: Not Supported 00:16:31.929 ANA Change Notices: Not Supported 00:16:31.929 PLE Aggregate Log Change Notices: Not Supported 00:16:31.929 LBA Status Info Alert Notices: Not Supported 00:16:31.929 EGE Aggregate Log Change Notices: Not Supported 00:16:31.929 Normal NVM Subsystem Shutdown event: Not Supported 00:16:31.929 Zone Descriptor Change Notices: Not Supported 00:16:31.929 Discovery Log Change Notices: Not Supported 00:16:31.929 Controller Attributes 00:16:31.929 128-bit Host Identifier: Supported 00:16:31.929 Non-Operational Permissive Mode: Not Supported 00:16:31.929 NVM Sets: Not Supported 00:16:31.929 Read Recovery Levels: Not Supported 00:16:31.929 Endurance Groups: Not Supported 00:16:31.929 Predictable Latency Mode: Not Supported 00:16:31.929 Traffic Based Keep ALive: Not Supported 00:16:31.929 Namespace Granularity: Not Supported 00:16:31.929 SQ Associations: Not Supported 00:16:31.929 UUID List: Not Supported 00:16:31.929 Multi-Domain Subsystem: Not Supported 00:16:31.929 
Fixed Capacity Management: Not Supported 00:16:31.929 Variable Capacity Management: Not Supported 00:16:31.929 Delete Endurance Group: Not Supported 00:16:31.929 Delete NVM Set: Not Supported 00:16:31.929 Extended LBA Formats Supported: Not Supported 00:16:31.929 Flexible Data Placement Supported: Not Supported 00:16:31.929 00:16:31.929 Controller Memory Buffer Support 00:16:31.929 ================================ 00:16:31.929 Supported: No 00:16:31.929 00:16:31.929 Persistent Memory Region Support 00:16:31.929 ================================ 00:16:31.929 Supported: No 00:16:31.929 00:16:31.929 Admin Command Set Attributes 00:16:31.929 ============================ 00:16:31.929 Security Send/Receive: Not Supported 00:16:31.929 Format NVM: Not Supported 00:16:31.929 Firmware Activate/Download: Not Supported 00:16:31.929 Namespace Management: Not Supported 00:16:31.929 Device Self-Test: Not Supported 00:16:31.929 Directives: Not Supported 00:16:31.929 NVMe-MI: Not Supported 00:16:31.929 Virtualization Management: Not Supported 00:16:31.929 Doorbell Buffer Config: Not Supported 00:16:31.929 Get LBA Status Capability: Not Supported 00:16:31.929 Command & Feature Lockdown Capability: Not Supported 00:16:31.929 Abort Command Limit: 4 00:16:31.929 Async Event Request Limit: 4 00:16:31.929 Number of Firmware Slots: N/A 00:16:31.929 Firmware Slot 1 Read-Only: N/A 00:16:31.929 Firmware Activation Without Reset: N/A 00:16:31.929 Multiple Update Detection Support: N/A 00:16:31.929 Firmware Update Granularity: No Information Provided 00:16:31.929 Per-Namespace SMART Log: No 00:16:31.929 Asymmetric Namespace Access Log Page: Not Supported 00:16:31.929 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:31.930 Command Effects Log Page: Supported 00:16:31.930 Get Log Page Extended Data: Supported 00:16:31.930 Telemetry Log Pages: Not Supported 00:16:31.930 Persistent Event Log Pages: Not Supported 00:16:31.930 Supported Log Pages Log Page: May Support 00:16:31.930 Commands Supported & 
Effects Log Page: Not Supported 00:16:31.930 Feature Identifiers & Effects Log Page:May Support 00:16:31.930 NVMe-MI Commands & Effects Log Page: May Support 00:16:31.930 Data Area 4 for Telemetry Log: Not Supported 00:16:31.930 Error Log Page Entries Supported: 128 00:16:31.930 Keep Alive: Supported 00:16:31.930 Keep Alive Granularity: 10000 ms 00:16:31.930 00:16:31.930 NVM Command Set Attributes 00:16:31.930 ========================== 00:16:31.930 Submission Queue Entry Size 00:16:31.930 Max: 64 00:16:31.930 Min: 64 00:16:31.930 Completion Queue Entry Size 00:16:31.930 Max: 16 00:16:31.930 Min: 16 00:16:31.930 Number of Namespaces: 32 00:16:31.930 Compare Command: Supported 00:16:31.930 Write Uncorrectable Command: Not Supported 00:16:31.930 Dataset Management Command: Supported 00:16:31.930 Write Zeroes Command: Supported 00:16:31.930 Set Features Save Field: Not Supported 00:16:31.930 Reservations: Not Supported 00:16:31.930 Timestamp: Not Supported 00:16:31.930 Copy: Supported 00:16:31.930 Volatile Write Cache: Present 00:16:31.930 Atomic Write Unit (Normal): 1 00:16:31.930 Atomic Write Unit (PFail): 1 00:16:31.930 Atomic Compare & Write Unit: 1 00:16:31.930 Fused Compare & Write: Supported 00:16:31.930 Scatter-Gather List 00:16:31.930 SGL Command Set: Supported (Dword aligned) 00:16:31.930 SGL Keyed: Not Supported 00:16:31.930 SGL Bit Bucket Descriptor: Not Supported 00:16:31.930 SGL Metadata Pointer: Not Supported 00:16:31.930 Oversized SGL: Not Supported 00:16:31.930 SGL Metadata Address: Not Supported 00:16:31.930 SGL Offset: Not Supported 00:16:31.930 Transport SGL Data Block: Not Supported 00:16:31.930 Replay Protected Memory Block: Not Supported 00:16:31.930 00:16:31.930 Firmware Slot Information 00:16:31.930 ========================= 00:16:31.930 Active slot: 1 00:16:31.930 Slot 1 Firmware Revision: 24.09 00:16:31.930 00:16:31.930 00:16:31.930 Commands Supported and Effects 00:16:31.930 ============================== 00:16:31.930 Admin Commands 
00:16:31.930 -------------- 00:16:31.930 Get Log Page (02h): Supported 00:16:31.930 Identify (06h): Supported 00:16:31.930 Abort (08h): Supported 00:16:31.930 Set Features (09h): Supported 00:16:31.930 Get Features (0Ah): Supported 00:16:31.930 Asynchronous Event Request (0Ch): Supported 00:16:31.930 Keep Alive (18h): Supported 00:16:31.930 I/O Commands 00:16:31.930 ------------ 00:16:31.930 Flush (00h): Supported LBA-Change 00:16:31.930 Write (01h): Supported LBA-Change 00:16:31.930 Read (02h): Supported 00:16:31.930 Compare (05h): Supported 00:16:31.930 Write Zeroes (08h): Supported LBA-Change 00:16:31.930 Dataset Management (09h): Supported LBA-Change 00:16:31.930 Copy (19h): Supported LBA-Change 00:16:31.930 00:16:31.930 Error Log 00:16:31.930 ========= 00:16:31.930 00:16:31.930 Arbitration 00:16:31.930 =========== 00:16:31.930 Arbitration Burst: 1 00:16:31.930 00:16:31.930 Power Management 00:16:31.930 ================ 00:16:31.930 Number of Power States: 1 00:16:31.930 Current Power State: Power State #0 00:16:31.930 Power State #0: 00:16:31.930 Max Power: 0.00 W 00:16:31.930 Non-Operational State: Operational 00:16:31.930 Entry Latency: Not Reported 00:16:31.930 Exit Latency: Not Reported 00:16:31.930 Relative Read Throughput: 0 00:16:31.930 Relative Read Latency: 0 00:16:31.930 Relative Write Throughput: 0 00:16:31.930 Relative Write Latency: 0 00:16:31.930 Idle Power: Not Reported 00:16:31.930 Active Power: Not Reported 00:16:31.930 Non-Operational Permissive Mode: Not Supported 00:16:31.930 00:16:31.930 Health Information 00:16:31.930 ================== 00:16:31.930 Critical Warnings: 00:16:31.930 Available Spare Space: OK 00:16:31.930 Temperature: OK 00:16:31.930 Device Reliability: OK 00:16:31.930 Read Only: No 00:16:31.930 Volatile Memory Backup: OK 00:16:31.930 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:31.930 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:31.930 Available Spare: 0% 00:16:31.930 Available Spare Threshold: 0% 00:16:31.930 [2024-07-26 08:49:50.294889] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:31.930 [2024-07-26 08:49:50.294906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:31.930 [2024-07-26 08:49:50.294950] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:31.930 [2024-07-26 08:49:50.294968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.930 [2024-07-26 08:49:50.294985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.930 [2024-07-26 08:49:50.294995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.930 [2024-07-26 08:49:50.295006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.930 [2024-07-26 08:49:50.299072] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:31.930 [2024-07-26 08:49:50.299096] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:31.930 [2024-07-26 08:49:50.299430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:31.930 [2024-07-26 08:49:50.299501] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:31.930 [2024-07-26 08:49:50.299515] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:31.930 [2024-07-26 
08:49:50.300442] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:31.930 [2024-07-26 08:49:50.300465] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:31.930 [2024-07-26 08:49:50.300519] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:31.930 [2024-07-26 08:49:50.302485] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:31.930 Life Percentage Used: 0% 00:16:31.930 Data Units Read: 0 00:16:31.930 Data Units Written: 0 00:16:31.930 Host Read Commands: 0 00:16:31.930 Host Write Commands: 0 00:16:31.930 Controller Busy Time: 0 minutes 00:16:31.930 Power Cycles: 0 00:16:31.930 Power On Hours: 0 hours 00:16:31.930 Unsafe Shutdowns: 0 00:16:31.930 Unrecoverable Media Errors: 0 00:16:31.930 Lifetime Error Log Entries: 0 00:16:31.930 Warning Temperature Time: 0 minutes 00:16:31.930 Critical Temperature Time: 0 minutes 00:16:31.930 00:16:31.930 Number of Queues 00:16:31.930 ================ 00:16:31.930 Number of I/O Submission Queues: 127 00:16:31.930 Number of I/O Completion Queues: 127 00:16:31.930 00:16:31.930 Active Namespaces 00:16:31.930 ================= 00:16:31.930 Namespace ID:1 00:16:31.930 Error Recovery Timeout: Unlimited 00:16:31.930 Command Set Identifier: NVM (00h) 00:16:31.930 Deallocate: Supported 00:16:31.930 Deallocated/Unwritten Error: Not Supported 00:16:31.930 Deallocated Read Value: Unknown 00:16:31.930 Deallocate in Write Zeroes: Not Supported 00:16:31.930 Deallocated Guard Field: 0xFFFF 00:16:31.930 Flush: Supported 00:16:31.930 Reservation: Supported 00:16:31.930 Namespace Sharing Capabilities: Multiple Controllers 00:16:31.930 Size (in LBAs): 131072 (0GiB) 00:16:31.930 Capacity (in LBAs): 
131072 (0GiB) 00:16:31.930 Utilization (in LBAs): 131072 (0GiB) 00:16:31.930 NGUID: 0E98537DD2ED4895A5A2DBDB4469476D 00:16:31.930 UUID: 0e98537d-d2ed-4895-a5a2-dbdb4469476d 00:16:31.930 Thin Provisioning: Not Supported 00:16:31.930 Per-NS Atomic Units: Yes 00:16:31.930 Atomic Boundary Size (Normal): 0 00:16:31.930 Atomic Boundary Size (PFail): 0 00:16:31.930 Atomic Boundary Offset: 0 00:16:31.930 Maximum Single Source Range Length: 65535 00:16:31.930 Maximum Copy Length: 65535 00:16:31.930 Maximum Source Range Count: 1 00:16:31.930 NGUID/EUI64 Never Reused: No 00:16:31.930 Namespace Write Protected: No 00:16:31.930 Number of LBA Formats: 1 00:16:31.930 Current LBA Format: LBA Format #00 00:16:31.930 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:31.930 00:16:31.931 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:31.931 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.191 [2024-07-26 08:49:50.530903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:37.472 Initializing NVMe Controllers 00:16:37.472 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:37.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:37.472 Initialization complete. Launching workers. 
00:16:37.472 ======================================================== 00:16:37.472 Latency(us) 00:16:37.472 Device Information : IOPS MiB/s Average min max 00:16:37.472 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34045.80 132.99 3761.00 1189.09 8318.35 00:16:37.472 ======================================================== 00:16:37.472 Total : 34045.80 132.99 3761.00 1189.09 8318.35 00:16:37.472 00:16:37.472 [2024-07-26 08:49:55.554808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:37.472 08:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:37.472 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.472 [2024-07-26 08:49:55.796976] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:42.755 Initializing NVMe Controllers 00:16:42.755 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:42.755 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:42.755 Initialization complete. Launching workers. 
00:16:42.755 ======================================================== 00:16:42.755 Latency(us) 00:16:42.755 Device Information : IOPS MiB/s Average min max 00:16:42.755 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.17 62.70 7982.89 4975.01 11020.90 00:16:42.755 ======================================================== 00:16:42.755 Total : 16051.17 62.70 7982.89 4975.01 11020.90 00:16:42.755 00:16:42.755 [2024-07-26 08:50:00.831951] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:42.755 08:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:42.755 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.755 [2024-07-26 08:50:01.048068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:48.031 [2024-07-26 08:50:06.115456] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:48.031 Initializing NVMe Controllers 00:16:48.031 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:48.031 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:48.031 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:48.031 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:48.031 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:48.031 Initialization complete. Launching workers. 
00:16:48.031 Starting thread on core 2 00:16:48.031 Starting thread on core 3 00:16:48.031 Starting thread on core 1 00:16:48.031 08:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:48.031 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.031 [2024-07-26 08:50:06.425536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:52.226 [2024-07-26 08:50:09.798318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:52.226 Initializing NVMe Controllers 00:16:52.226 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:52.226 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:52.226 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:52.226 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:52.226 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:52.226 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:52.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:52.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:52.226 Initialization complete. Launching workers. 
00:16:52.226 Starting thread on core 1 with urgent priority queue 00:16:52.226 Starting thread on core 2 with urgent priority queue 00:16:52.226 Starting thread on core 3 with urgent priority queue 00:16:52.226 Starting thread on core 0 with urgent priority queue 00:16:52.226 SPDK bdev Controller (SPDK1 ) core 0: 1542.33 IO/s 64.84 secs/100000 ios 00:16:52.226 SPDK bdev Controller (SPDK1 ) core 1: 1980.67 IO/s 50.49 secs/100000 ios 00:16:52.226 SPDK bdev Controller (SPDK1 ) core 2: 1813.67 IO/s 55.14 secs/100000 ios 00:16:52.226 SPDK bdev Controller (SPDK1 ) core 3: 1789.33 IO/s 55.89 secs/100000 ios 00:16:52.226 ======================================================== 00:16:52.226 00:16:52.226 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:52.226 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.226 [2024-07-26 08:50:10.080285] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:52.226 Initializing NVMe Controllers 00:16:52.226 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:52.226 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:52.226 Namespace ID: 1 size: 0GB 00:16:52.226 Initialization complete. 00:16:52.226 INFO: using host memory buffer for IO 00:16:52.226 Hello world! 
00:16:52.226 [2024-07-26 08:50:10.115907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:52.226 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:52.226 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.226 [2024-07-26 08:50:10.410552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:53.167 Initializing NVMe Controllers 00:16:53.167 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:53.167 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:53.167 Initialization complete. Launching workers. 00:16:53.167 submit (in ns) avg, min, max = 7624.3, 3558.9, 4016031.1 00:16:53.167 complete (in ns) avg, min, max = 24140.2, 2071.1, 5011080.0 00:16:53.167 00:16:53.167 Submit histogram 00:16:53.167 ================ 00:16:53.167 Range in us Cumulative Count 00:16:53.167 3.556 - 3.579: 0.4357% ( 58) 00:16:53.167 3.579 - 3.603: 1.2996% ( 115) 00:16:53.167 3.603 - 3.627: 4.0565% ( 367) 00:16:53.167 3.627 - 3.650: 9.0595% ( 666) 00:16:53.167 3.650 - 3.674: 17.2927% ( 1096) 00:16:53.167 3.674 - 3.698: 26.6677% ( 1248) 00:16:53.167 3.698 - 3.721: 36.7713% ( 1345) 00:16:53.167 3.721 - 3.745: 44.3585% ( 1010) 00:16:53.167 3.745 - 3.769: 50.5559% ( 825) 00:16:53.167 3.769 - 3.793: 55.4312% ( 649) 00:16:53.167 3.793 - 3.816: 59.5027% ( 542) 00:16:53.167 3.816 - 3.840: 62.9432% ( 458) 00:16:53.167 3.840 - 3.864: 66.3311% ( 451) 00:16:53.167 3.864 - 3.887: 70.2449% ( 521) 00:16:53.167 3.887 - 3.911: 73.8807% ( 484) 00:16:53.167 3.911 - 3.935: 78.1926% ( 574) 00:16:53.167 3.935 - 3.959: 81.9787% ( 504) 00:16:53.167 3.959 - 3.982: 84.9609% ( 397) 00:16:53.167 3.982 - 4.006: 87.3648% ( 320) 00:16:53.167 4.006 - 
4.030: 88.9724% ( 214) 00:16:53.167 4.030 - 4.053: 90.3696% ( 186) 00:16:53.167 4.053 - 4.077: 91.6466% ( 170) 00:16:53.167 4.077 - 4.101: 92.6908% ( 139) 00:16:53.167 4.101 - 4.124: 93.5998% ( 121) 00:16:53.167 4.124 - 4.148: 94.3885% ( 105) 00:16:53.167 4.148 - 4.172: 94.9970% ( 81) 00:16:53.167 4.172 - 4.196: 95.4703% ( 63) 00:16:53.167 4.196 - 4.219: 95.8759% ( 54) 00:16:53.167 4.219 - 4.243: 96.1313% ( 34) 00:16:53.167 4.243 - 4.267: 96.2816% ( 20) 00:16:53.167 4.267 - 4.290: 96.4393% ( 21) 00:16:53.167 4.290 - 4.314: 96.5294% ( 12) 00:16:53.167 4.314 - 4.338: 96.6196% ( 12) 00:16:53.167 4.338 - 4.361: 96.7323% ( 15) 00:16:53.167 4.361 - 4.385: 96.8299% ( 13) 00:16:53.167 4.385 - 4.409: 96.8450% ( 2) 00:16:53.167 4.409 - 4.433: 96.9050% ( 8) 00:16:53.167 4.433 - 4.456: 96.9276% ( 3) 00:16:53.167 4.456 - 4.480: 96.9576% ( 4) 00:16:53.167 4.480 - 4.504: 96.9727% ( 2) 00:16:53.167 4.504 - 4.527: 96.9952% ( 3) 00:16:53.167 4.527 - 4.551: 97.0102% ( 2) 00:16:53.167 4.551 - 4.575: 97.0177% ( 1) 00:16:53.167 4.575 - 4.599: 97.0478% ( 4) 00:16:53.167 4.599 - 4.622: 97.0553% ( 1) 00:16:53.167 4.622 - 4.646: 97.0628% ( 1) 00:16:53.167 4.646 - 4.670: 97.0703% ( 1) 00:16:53.167 4.670 - 4.693: 97.0778% ( 1) 00:16:53.167 4.693 - 4.717: 97.0928% ( 2) 00:16:53.167 4.717 - 4.741: 97.1154% ( 3) 00:16:53.167 4.741 - 4.764: 97.1304% ( 2) 00:16:53.167 4.764 - 4.788: 97.1529% ( 3) 00:16:53.167 4.788 - 4.812: 97.1980% ( 6) 00:16:53.167 4.812 - 4.836: 97.2431% ( 6) 00:16:53.167 4.836 - 4.859: 97.3032% ( 8) 00:16:53.167 4.859 - 4.883: 97.3558% ( 7) 00:16:53.167 4.883 - 4.907: 97.3858% ( 4) 00:16:53.167 4.907 - 4.930: 97.4309% ( 6) 00:16:53.167 4.930 - 4.954: 97.4760% ( 6) 00:16:53.167 4.954 - 4.978: 97.5361% ( 8) 00:16:53.167 4.978 - 5.001: 97.5886% ( 7) 00:16:53.167 5.001 - 5.025: 97.6412% ( 7) 00:16:53.167 5.025 - 5.049: 97.6938% ( 7) 00:16:53.167 5.049 - 5.073: 97.7389% ( 6) 00:16:53.167 5.073 - 5.096: 97.7689% ( 4) 00:16:53.167 5.096 - 5.120: 97.8065% ( 5) 00:16:53.167 5.120 - 
5.144: 97.8140% ( 1) 00:16:53.167 5.144 - 5.167: 97.8290% ( 2) 00:16:53.167 5.167 - 5.191: 97.8516% ( 3) 00:16:53.168 5.191 - 5.215: 97.8591% ( 1) 00:16:53.168 5.215 - 5.239: 97.8666% ( 1) 00:16:53.168 5.239 - 5.262: 97.8816% ( 2) 00:16:53.168 5.262 - 5.286: 97.9117% ( 4) 00:16:53.168 5.310 - 5.333: 97.9267% ( 2) 00:16:53.168 5.357 - 5.381: 97.9417% ( 2) 00:16:53.168 5.428 - 5.452: 97.9492% ( 1) 00:16:53.168 5.452 - 5.476: 97.9567% ( 1) 00:16:53.168 5.476 - 5.499: 97.9718% ( 2) 00:16:53.168 5.618 - 5.641: 97.9793% ( 1) 00:16:53.168 5.760 - 5.784: 97.9868% ( 1) 00:16:53.168 5.926 - 5.950: 97.9943% ( 1) 00:16:53.168 6.021 - 6.044: 98.0018% ( 1) 00:16:53.168 6.068 - 6.116: 98.0093% ( 1) 00:16:53.168 6.116 - 6.163: 98.0243% ( 2) 00:16:53.168 6.210 - 6.258: 98.0319% ( 1) 00:16:53.168 6.400 - 6.447: 98.0469% ( 2) 00:16:53.168 6.542 - 6.590: 98.0544% ( 1) 00:16:53.168 6.590 - 6.637: 98.0619% ( 1) 00:16:53.168 6.684 - 6.732: 98.0769% ( 2) 00:16:53.168 6.827 - 6.874: 98.0844% ( 1) 00:16:53.168 6.874 - 6.921: 98.0919% ( 1) 00:16:53.168 6.921 - 6.969: 98.1145% ( 3) 00:16:53.168 7.016 - 7.064: 98.1220% ( 1) 00:16:53.168 7.111 - 7.159: 98.1295% ( 1) 00:16:53.168 7.159 - 7.206: 98.1370% ( 1) 00:16:53.168 7.206 - 7.253: 98.1445% ( 1) 00:16:53.168 7.253 - 7.301: 98.1596% ( 2) 00:16:53.168 7.396 - 7.443: 98.1821% ( 3) 00:16:53.168 7.443 - 7.490: 98.1896% ( 1) 00:16:53.168 7.490 - 7.538: 98.2046% ( 2) 00:16:53.168 7.538 - 7.585: 98.2121% ( 1) 00:16:53.168 7.585 - 7.633: 98.2197% ( 1) 00:16:53.168 7.633 - 7.680: 98.2347% ( 2) 00:16:53.168 7.917 - 7.964: 98.2497% ( 2) 00:16:53.168 7.964 - 8.012: 98.2572% ( 1) 00:16:53.168 8.012 - 8.059: 98.2722% ( 2) 00:16:53.168 8.154 - 8.201: 98.2797% ( 1) 00:16:53.168 8.201 - 8.249: 98.2873% ( 1) 00:16:53.168 8.296 - 8.344: 98.2948% ( 1) 00:16:53.168 8.344 - 8.391: 98.3098% ( 2) 00:16:53.168 8.486 - 8.533: 98.3173% ( 1) 00:16:53.168 8.581 - 8.628: 98.3248% ( 1) 00:16:53.168 8.628 - 8.676: 98.3323% ( 1) 00:16:53.168 8.770 - 8.818: 98.3549% ( 3) 
00:16:53.168 8.818 - 8.865: 98.3699% ( 2) 00:16:53.168 9.007 - 9.055: 98.3774% ( 1) 00:16:53.168 9.197 - 9.244: 98.3849% ( 1) 00:16:53.168 9.244 - 9.292: 98.4150% ( 4) 00:16:53.168 9.292 - 9.339: 98.4225% ( 1) 00:16:53.168 9.339 - 9.387: 98.4300% ( 1) 00:16:53.168 9.387 - 9.434: 98.4375% ( 1) 00:16:53.168 9.481 - 9.529: 98.4450% ( 1) 00:16:53.168 9.766 - 9.813: 98.4525% ( 1) 00:16:53.168 9.813 - 9.861: 98.4600% ( 1) 00:16:53.168 9.861 - 9.908: 98.4751% ( 2) 00:16:53.168 9.908 - 9.956: 98.4826% ( 1) 00:16:53.168 9.956 - 10.003: 98.4901% ( 1) 00:16:53.168 10.145 - 10.193: 98.5051% ( 2) 00:16:53.168 10.240 - 10.287: 98.5126% ( 1) 00:16:53.168 10.335 - 10.382: 98.5201% ( 1) 00:16:53.168 10.382 - 10.430: 98.5352% ( 2) 00:16:53.168 10.477 - 10.524: 98.5427% ( 1) 00:16:53.168 10.619 - 10.667: 98.5577% ( 2) 00:16:53.168 10.761 - 10.809: 98.5652% ( 1) 00:16:53.168 10.856 - 10.904: 98.5802% ( 2) 00:16:53.168 11.093 - 11.141: 98.5877% ( 1) 00:16:53.168 11.141 - 11.188: 98.6028% ( 2) 00:16:53.168 11.283 - 11.330: 98.6103% ( 1) 00:16:53.168 11.330 - 11.378: 98.6178% ( 1) 00:16:53.168 11.425 - 11.473: 98.6328% ( 2) 00:16:53.168 11.615 - 11.662: 98.6403% ( 1) 00:16:53.168 11.662 - 11.710: 98.6629% ( 3) 00:16:53.168 11.804 - 11.852: 98.6704% ( 1) 00:16:53.168 11.947 - 11.994: 98.6779% ( 1) 00:16:53.168 12.041 - 12.089: 98.6929% ( 2) 00:16:53.168 12.089 - 12.136: 98.7004% ( 1) 00:16:53.168 12.421 - 12.516: 98.7079% ( 1) 00:16:53.168 12.516 - 12.610: 98.7154% ( 1) 00:16:53.168 12.610 - 12.705: 98.7230% ( 1) 00:16:53.168 12.705 - 12.800: 98.7380% ( 2) 00:16:53.168 12.990 - 13.084: 98.7530% ( 2) 00:16:53.168 13.179 - 13.274: 98.7605% ( 1) 00:16:53.168 13.369 - 13.464: 98.7755% ( 2) 00:16:53.168 13.559 - 13.653: 98.7831% ( 1) 00:16:53.168 13.748 - 13.843: 98.8131% ( 4) 00:16:53.168 13.938 - 14.033: 98.8356% ( 3) 00:16:53.168 14.127 - 14.222: 98.8507% ( 2) 00:16:53.168 14.412 - 14.507: 98.8657% ( 2) 00:16:53.168 14.507 - 14.601: 98.8807% ( 2) 00:16:53.168 14.696 - 14.791: 98.8882% ( 1) 
00:16:53.168 14.981 - 15.076: 98.9032% ( 2) 00:16:53.168 15.076 - 15.170: 98.9258% ( 3) 00:16:53.168 15.170 - 15.265: 98.9333% ( 1) 00:16:53.168 15.265 - 15.360: 98.9408% ( 1) 00:16:53.168 15.360 - 15.455: 98.9558% ( 2) 00:16:53.168 15.739 - 15.834: 98.9633% ( 1) 00:16:53.168 16.972 - 17.067: 98.9784% ( 2) 00:16:53.168 17.256 - 17.351: 98.9934% ( 2) 00:16:53.168 17.351 - 17.446: 99.0309% ( 5) 00:16:53.168 17.446 - 17.541: 99.0760% ( 6) 00:16:53.168 17.636 - 17.730: 99.1511% ( 10) 00:16:53.168 17.730 - 17.825: 99.1737% ( 3) 00:16:53.168 17.825 - 17.920: 99.2112% ( 5) 00:16:53.168 17.920 - 18.015: 99.2788% ( 9) 00:16:53.168 18.015 - 18.110: 99.2939% ( 2) 00:16:53.168 18.110 - 18.204: 99.3540% ( 8) 00:16:53.168 18.204 - 18.299: 99.4441% ( 12) 00:16:53.168 18.299 - 18.394: 99.5192% ( 10) 00:16:53.168 18.394 - 18.489: 99.6319% ( 15) 00:16:53.168 18.489 - 18.584: 99.6770% ( 6) 00:16:53.168 18.584 - 18.679: 99.6995% ( 3) 00:16:53.168 18.679 - 18.773: 99.7145% ( 2) 00:16:53.168 18.773 - 18.868: 99.7371% ( 3) 00:16:53.168 18.868 - 18.963: 99.7596% ( 3) 00:16:53.168 18.963 - 19.058: 99.7822% ( 3) 00:16:53.168 19.058 - 19.153: 99.7972% ( 2) 00:16:53.168 19.153 - 19.247: 99.8272% ( 4) 00:16:53.168 19.247 - 19.342: 99.8422% ( 2) 00:16:53.169 19.342 - 19.437: 99.8498% ( 1) 00:16:53.169 20.385 - 20.480: 99.8573% ( 1) 00:16:53.169 21.333 - 21.428: 99.8648% ( 1) 00:16:53.169 22.471 - 22.566: 99.8723% ( 1) 00:16:53.169 22.945 - 23.040: 99.8798% ( 1) 00:16:53.169 23.988 - 24.083: 99.8873% ( 1) 00:16:53.169 24.273 - 24.462: 99.8948% ( 1) 00:16:53.169 27.496 - 27.686: 99.9023% ( 1) 00:16:53.169 28.634 - 28.824: 99.9099% ( 1) 00:16:53.169 3980.705 - 4004.978: 99.9850% ( 10) 00:16:53.169 4004.978 - 4029.250: 100.0000% ( 2) 00:16:53.169 00:16:53.169 Complete histogram 00:16:53.169 ================== 00:16:53.169 Range in us Cumulative Count 00:16:53.169 2.062 - 2.074: 0.0826% ( 11) 00:16:53.169 2.074 - 2.086: 16.0907% ( 2131) 00:16:53.169 2.086 - 2.098: 39.5132% ( 3118) 00:16:53.169 2.098 
- 2.110: 41.8570% ( 312) 00:16:53.169 2.110 - 2.121: 54.9129% ( 1738) 00:16:53.169 2.121 - 2.133: 61.2079% ( 838) 00:16:53.169 2.133 - 2.145: 63.4841% ( 303) 00:16:53.169 2.145 - 2.157: 70.9210% ( 990) 00:16:53.169 2.157 - 2.169: 74.9399% ( 535) 00:16:53.169 2.169 - 2.181: 76.5099% ( 209) 00:16:53.169 2.181 - 2.193: 81.1974% ( 624) 00:16:53.169 2.193 - 2.204: 83.0604% ( 248) 00:16:53.169 2.204 - 2.216: 83.6914% ( 84) 00:16:53.169 2.216 - 2.228: 86.6061% ( 388) 00:16:53.169 2.228 - 2.240: 89.7912% ( 424) 00:16:53.169 2.240 - 2.252: 91.0006% ( 161) 00:16:53.169 2.252 - 2.264: 92.9838% ( 264) 00:16:53.169 2.264 - 2.276: 93.9078% ( 123) 00:16:53.169 2.276 - 2.287: 94.2157% ( 41) 00:16:53.169 2.287 - 2.299: 94.5613% ( 46) 00:16:53.169 2.299 - 2.311: 95.3501% ( 105) 00:16:53.169 2.311 - 2.323: 95.7632% ( 55) 00:16:53.169 2.323 - 2.335: 95.8308% ( 9) 00:16:53.169 2.335 - 2.347: 95.8909% ( 8) 00:16:53.169 2.347 - 2.359: 95.9961% ( 14) 00:16:53.169 2.359 - 2.370: 96.2064% ( 28) 00:16:53.169 2.370 - 2.382: 96.5370% ( 44) 00:16:53.169 2.382 - 2.394: 96.9576% ( 56) 00:16:53.169 2.394 - 2.406: 97.2731% ( 42) 00:16:53.169 2.406 - 2.418: 97.4459% ( 23) 00:16:53.169 2.418 - 2.430: 97.5511% ( 14) 00:16:53.169 2.430 - 2.441: 97.7689% ( 29) 00:16:53.169 2.441 - 2.453: 97.8891% ( 16) 00:16:53.169 2.453 - 2.465: 97.9718% ( 11) 00:16:53.169 2.465 - 2.477: 98.0919% ( 16) 00:16:53.169 2.477 - 2.489: 98.1671% ( 10) 00:16:53.169 2.489 - 2.501: 98.2572% ( 12) 00:16:53.169 2.501 - 2.513: 98.3173% ( 8) 00:16:53.169 2.513 - 2.524: 98.3398% ( 3) 00:16:53.169 2.524 - 2.536: 98.3774% ( 5) 00:16:53.169 2.536 - 2.548: 98.3849% ( 1) 00:16:53.169 2.548 - 2.560: 98.4075% ( 3) 00:16:53.169 2.560 - 2.572: 98.4150% ( 1) 00:16:53.169 2.584 - 2.596: 98.4225% ( 1) 00:16:53.169 2.607 - 2.619: 98.4300% ( 1) 00:16:53.169 2.619 - 2.631: 98.4375% ( 1) 00:16:53.169 2.679 - 2.690: 98.4450% ( 1) 00:16:53.169 2.738 - 2.750: 98.4525% ( 1) 00:16:53.169 2.761 - 2.773: 98.4600% ( 1) 00:16:53.169 3.105 - 3.129: 98.4675% ( 
1) 00:16:53.169 3.129 - 3.153: 98.4751% ( 1) 00:16:53.169 3.176 - 3.200: 98.4826% ( 1) 00:16:53.169 3.224 - 3.247: 98.4901% ( 1) 00:16:53.169 [2024-07-26 08:50:11.432697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:53.169 3.271 - 3.295: 98.4976% ( 1) 00:16:53.169 3.295 - 3.319: 98.5051% ( 1) 00:16:53.169 3.319 - 3.342: 98.5276% ( 3) 00:16:53.169 3.342 - 3.366: 98.5427% ( 2) 00:16:53.169 3.366 - 3.390: 98.5502% ( 1) 00:16:53.169 3.390 - 3.413: 98.5652% ( 2) 00:16:53.169 3.413 - 3.437: 98.5802% ( 2) 00:16:53.169 3.437 - 3.461: 98.5877% ( 1) 00:16:53.169 3.484 - 3.508: 98.6028% ( 2) 00:16:53.169 3.508 - 3.532: 98.6253% ( 3) 00:16:53.169 3.579 - 3.603: 98.6403% ( 2) 00:16:53.169 3.627 - 3.650: 98.6478% ( 1) 00:16:53.169 3.650 - 3.674: 98.6553% ( 1) 00:16:53.169 3.698 - 3.721: 98.6629% ( 1) 00:16:53.169 3.745 - 3.769: 98.6704% ( 1) 00:16:53.169 3.769 - 3.793: 98.6779% ( 1) 00:16:53.169 3.793 - 3.816: 98.6854% ( 1) 00:16:53.169 3.816 - 3.840: 98.6929% ( 1) 00:16:53.169 3.840 - 3.864: 98.7004% ( 1) 00:16:53.169 3.864 - 3.887: 98.7079% ( 1) 00:16:53.169 4.148 - 4.172: 98.7154% ( 1) 00:16:53.169 4.196 - 4.219: 98.7230% ( 1) 00:16:53.169 4.433 - 4.456: 98.7305% ( 1) 00:16:53.169 4.907 - 4.930: 98.7380% ( 1) 00:16:53.169 5.191 - 5.215: 98.7455% ( 1) 00:16:53.169 5.239 - 5.262: 98.7530% ( 1) 00:16:53.169 5.333 - 5.357: 98.7605% ( 1) 00:16:53.169 5.452 - 5.476: 98.7680% ( 1) 00:16:53.169 5.831 - 5.855: 98.7755% ( 1) 00:16:53.169 5.950 - 5.973: 98.7831% ( 1) 00:16:53.169 6.068 - 6.116: 98.7906% ( 1) 00:16:53.169 6.447 - 6.495: 98.7981% ( 1) 00:16:53.169 6.590 - 6.637: 98.8206% ( 3) 00:16:53.169 6.637 - 6.684: 98.8281% ( 1) 00:16:53.169 6.684 - 6.732: 98.8356% ( 1) 00:16:53.169 7.016 - 7.064: 98.8431% ( 1) 00:16:53.169 7.443 - 7.490: 98.8507% ( 1) 00:16:53.169 8.391 - 8.439: 98.8582% ( 1) 00:16:53.169 10.714 - 10.761: 98.8657% ( 1) 00:16:53.169 15.455 - 15.550: 98.8807% ( 2) 00:16:53.169 15.739 - 15.834: 98.8957% ( 
2) 00:16:53.169 15.834 - 15.929: 98.9032% ( 1) 00:16:53.169 15.929 - 16.024: 98.9333% ( 4) 00:16:53.169 16.024 - 16.119: 98.9558% ( 3) 00:16:53.169 16.119 - 16.213: 98.9784% ( 3) 00:16:53.169 16.213 - 16.308: 99.0009% ( 3) 00:16:53.169 16.308 - 16.403: 99.0385% ( 5) 00:16:53.169 16.403 - 16.498: 99.0910% ( 7) 00:16:53.169 16.498 - 16.593: 99.1662% ( 10) 00:16:53.170 16.593 - 16.687: 99.2037% ( 5) 00:16:53.170 16.687 - 16.782: 99.2413% ( 5) 00:16:53.170 16.782 - 16.877: 99.2939% ( 7) 00:16:53.170 16.877 - 16.972: 99.3089% ( 2) 00:16:53.170 16.972 - 17.067: 99.3465% ( 5) 00:16:53.170 17.067 - 17.161: 99.3690% ( 3) 00:16:53.170 17.161 - 17.256: 99.3840% ( 2) 00:16:53.170 17.256 - 17.351: 99.3915% ( 1) 00:16:53.170 17.446 - 17.541: 99.4141% ( 3) 00:16:53.170 17.636 - 17.730: 99.4216% ( 1) 00:16:53.170 18.015 - 18.110: 99.4291% ( 1) 00:16:53.170 18.110 - 18.204: 99.4441% ( 2) 00:16:53.170 194.181 - 195.698: 99.4516% ( 1) 00:16:53.170 2439.396 - 2451.532: 99.4591% ( 1) 00:16:53.170 3980.705 - 4004.978: 99.9099% ( 60) 00:16:53.170 4004.978 - 4029.250: 99.9925% ( 11) 00:16:53.170 5000.154 - 5024.427: 100.0000% ( 1) 00:16:53.170 00:16:53.170 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:53.170 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:53.170 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:53.170 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:53.170 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:53.472 [ 00:16:53.472 { 00:16:53.472 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:16:53.472 "subtype": "Discovery", 00:16:53.472 "listen_addresses": [], 00:16:53.472 "allow_any_host": true, 00:16:53.472 "hosts": [] 00:16:53.472 }, 00:16:53.472 { 00:16:53.472 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:53.472 "subtype": "NVMe", 00:16:53.472 "listen_addresses": [ 00:16:53.472 { 00:16:53.472 "trtype": "VFIOUSER", 00:16:53.472 "adrfam": "IPv4", 00:16:53.472 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:53.472 "trsvcid": "0" 00:16:53.472 } 00:16:53.472 ], 00:16:53.472 "allow_any_host": true, 00:16:53.472 "hosts": [], 00:16:53.472 "serial_number": "SPDK1", 00:16:53.472 "model_number": "SPDK bdev Controller", 00:16:53.472 "max_namespaces": 32, 00:16:53.472 "min_cntlid": 1, 00:16:53.472 "max_cntlid": 65519, 00:16:53.472 "namespaces": [ 00:16:53.472 { 00:16:53.472 "nsid": 1, 00:16:53.472 "bdev_name": "Malloc1", 00:16:53.472 "name": "Malloc1", 00:16:53.472 "nguid": "0E98537DD2ED4895A5A2DBDB4469476D", 00:16:53.472 "uuid": "0e98537d-d2ed-4895-a5a2-dbdb4469476d" 00:16:53.472 } 00:16:53.472 ] 00:16:53.472 }, 00:16:53.472 { 00:16:53.472 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:53.472 "subtype": "NVMe", 00:16:53.472 "listen_addresses": [ 00:16:53.472 { 00:16:53.472 "trtype": "VFIOUSER", 00:16:53.472 "adrfam": "IPv4", 00:16:53.472 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:53.472 "trsvcid": "0" 00:16:53.472 } 00:16:53.472 ], 00:16:53.472 "allow_any_host": true, 00:16:53.472 "hosts": [], 00:16:53.472 "serial_number": "SPDK2", 00:16:53.472 "model_number": "SPDK bdev Controller", 00:16:53.472 "max_namespaces": 32, 00:16:53.472 "min_cntlid": 1, 00:16:53.472 "max_cntlid": 65519, 00:16:53.472 "namespaces": [ 00:16:53.472 { 00:16:53.472 "nsid": 1, 00:16:53.472 "bdev_name": "Malloc2", 00:16:53.472 "name": "Malloc2", 00:16:53.472 "nguid": "7955259F71CF4479982D6B0024843228", 00:16:53.472 "uuid": "7955259f-71cf-4479-982d-6b0024843228" 00:16:53.472 } 00:16:53.472 ] 00:16:53.472 } 00:16:53.472 ] 
00:16:53.472 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:53.472 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=960210 00:16:53.472 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:53.472 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:53.472 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:53.472 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:53.472 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:53.472 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:53.472 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:53.472 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:53.472 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.472 [2024-07-26 08:50:11.887542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:53.730 Malloc3 00:16:53.730 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:53.987 [2024-07-26 08:50:12.265403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:53.987 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:53.987 Asynchronous Event Request test 00:16:53.987 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:53.987 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:53.987 Registering asynchronous event callbacks... 00:16:53.987 Starting namespace attribute notice tests for all controllers... 00:16:53.987 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:53.987 aer_cb - Changed Namespace 00:16:53.987 Cleaning up... 
00:16:54.247 [ 00:16:54.247 { 00:16:54.247 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:54.247 "subtype": "Discovery", 00:16:54.247 "listen_addresses": [], 00:16:54.247 "allow_any_host": true, 00:16:54.247 "hosts": [] 00:16:54.247 }, 00:16:54.247 { 00:16:54.247 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:54.247 "subtype": "NVMe", 00:16:54.247 "listen_addresses": [ 00:16:54.247 { 00:16:54.247 "trtype": "VFIOUSER", 00:16:54.247 "adrfam": "IPv4", 00:16:54.247 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:54.247 "trsvcid": "0" 00:16:54.247 } 00:16:54.247 ], 00:16:54.247 "allow_any_host": true, 00:16:54.247 "hosts": [], 00:16:54.247 "serial_number": "SPDK1", 00:16:54.247 "model_number": "SPDK bdev Controller", 00:16:54.247 "max_namespaces": 32, 00:16:54.247 "min_cntlid": 1, 00:16:54.247 "max_cntlid": 65519, 00:16:54.247 "namespaces": [ 00:16:54.247 { 00:16:54.247 "nsid": 1, 00:16:54.247 "bdev_name": "Malloc1", 00:16:54.247 "name": "Malloc1", 00:16:54.247 "nguid": "0E98537DD2ED4895A5A2DBDB4469476D", 00:16:54.247 "uuid": "0e98537d-d2ed-4895-a5a2-dbdb4469476d" 00:16:54.247 }, 00:16:54.247 { 00:16:54.247 "nsid": 2, 00:16:54.247 "bdev_name": "Malloc3", 00:16:54.247 "name": "Malloc3", 00:16:54.247 "nguid": "A0E58CD13FCA4AED97DD4620DC5244D9", 00:16:54.247 "uuid": "a0e58cd1-3fca-4aed-97dd-4620dc5244d9" 00:16:54.247 } 00:16:54.247 ] 00:16:54.247 }, 00:16:54.247 { 00:16:54.247 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:54.247 "subtype": "NVMe", 00:16:54.247 "listen_addresses": [ 00:16:54.247 { 00:16:54.247 "trtype": "VFIOUSER", 00:16:54.247 "adrfam": "IPv4", 00:16:54.247 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:54.247 "trsvcid": "0" 00:16:54.247 } 00:16:54.247 ], 00:16:54.247 "allow_any_host": true, 00:16:54.247 "hosts": [], 00:16:54.247 "serial_number": "SPDK2", 00:16:54.247 "model_number": "SPDK bdev Controller", 00:16:54.247 "max_namespaces": 32, 00:16:54.247 "min_cntlid": 1, 00:16:54.247 "max_cntlid": 65519, 00:16:54.247 "namespaces": [ 
00:16:54.247 { 00:16:54.247 "nsid": 1, 00:16:54.247 "bdev_name": "Malloc2", 00:16:54.247 "name": "Malloc2", 00:16:54.247 "nguid": "7955259F71CF4479982D6B0024843228", 00:16:54.247 "uuid": "7955259f-71cf-4479-982d-6b0024843228" 00:16:54.247 } 00:16:54.247 ] 00:16:54.247 } 00:16:54.247 ] 00:16:54.247 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 960210 00:16:54.247 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:54.247 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:54.247 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:54.247 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:54.247 [2024-07-26 08:50:12.543978] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:54.247 [2024-07-26 08:50:12.544023] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960345 ] 00:16:54.247 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.247 [2024-07-26 08:50:12.560626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
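The `nvmf_get_subsystems` RPC above returns plain JSON, so a driver script can recover the namespace-to-bdev mapping directly. A small sketch follows; `response` is a hand-trimmed copy of the reply shown in the log (only the fields the helper reads are kept), not the full output.

```python
import json

# Hand-trimmed copy of the `nvmf_get_subsystems` reply from the log;
# only the fields used below are retained.
response = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "namespaces": []},
  {"nqn": "nqn.2019-07.io.spdk:cnode1", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc1"},
                  {"nsid": 2, "bdev_name": "Malloc3"}]},
  {"nqn": "nqn.2019-07.io.spdk:cnode2", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc2"}]}
]
""")

def namespace_map(subsystems):
    """Map subsystem NQN -> {nsid: bdev_name}, skipping the discovery subsystem."""
    return {
        s["nqn"]: {ns["nsid"]: ns["bdev_name"] for ns in s.get("namespaces", [])}
        for s in subsystems
        if s.get("subtype") == "NVMe"
    }

print(namespace_map(response))
```

This reproduces what the log shows: cnode1 carries Malloc1 (nsid 1) plus the Malloc3 namespace just added as nsid 2, while cnode2 still carries only Malloc2.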
00:16:54.247 [2024-07-26 08:50:12.578210] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:54.247 [2024-07-26 08:50:12.583537] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:54.247 [2024-07-26 08:50:12.583571] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7eff3433a000 00:16:54.247 [2024-07-26 08:50:12.584532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:54.247 [2024-07-26 08:50:12.585538] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:54.247 [2024-07-26 08:50:12.586544] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:54.247 [2024-07-26 08:50:12.587546] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:54.247 [2024-07-26 08:50:12.588553] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:54.247 [2024-07-26 08:50:12.589558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:54.247 [2024-07-26 08:50:12.590566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:54.247 [2024-07-26 08:50:12.591566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:54.247 [2024-07-26 08:50:12.592573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 
00:16:54.248 [2024-07-26 08:50:12.592594] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7eff330fc000 00:16:54.248 [2024-07-26 08:50:12.593710] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:54.248 [2024-07-26 08:50:12.608472] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:54.248 [2024-07-26 08:50:12.608508] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:54.248 [2024-07-26 08:50:12.610587] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:54.248 [2024-07-26 08:50:12.610641] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:54.248 [2024-07-26 08:50:12.610731] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:54.248 [2024-07-26 08:50:12.610752] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:54.248 [2024-07-26 08:50:12.610762] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:54.248 [2024-07-26 08:50:12.611589] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:54.248 [2024-07-26 08:50:12.611616] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:54.248 [2024-07-26 08:50:12.611629] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:54.248 [2024-07-26 08:50:12.612600] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:54.248 [2024-07-26 08:50:12.612619] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:54.248 [2024-07-26 08:50:12.612633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:54.248 [2024-07-26 08:50:12.613603] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:54.248 [2024-07-26 08:50:12.613624] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:54.248 [2024-07-26 08:50:12.614606] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:54.248 [2024-07-26 08:50:12.614627] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:54.248 [2024-07-26 08:50:12.614636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:54.248 [2024-07-26 08:50:12.614647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:54.248 [2024-07-26 08:50:12.614757] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:54.248 [2024-07-26 08:50:12.614765] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:54.248 [2024-07-26 08:50:12.614774] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:54.248 [2024-07-26 08:50:12.615617] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:54.248 [2024-07-26 08:50:12.616625] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:54.248 [2024-07-26 08:50:12.617639] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:54.248 [2024-07-26 08:50:12.618635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:54.248 [2024-07-26 08:50:12.618702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:54.248 [2024-07-26 08:50:12.619653] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:54.248 [2024-07-26 08:50:12.619672] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:54.248 [2024-07-26 08:50:12.619681] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.619704] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:54.248 [2024-07-26 08:50:12.619717] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.619742] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:54.248 [2024-07-26 08:50:12.619752] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:54.248 [2024-07-26 08:50:12.619759] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.248 [2024-07-26 08:50:12.619779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:54.248 [2024-07-26 08:50:12.626072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:54.248 [2024-07-26 08:50:12.626094] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:54.248 [2024-07-26 08:50:12.626104] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:54.248 [2024-07-26 08:50:12.626112] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:54.248 [2024-07-26 08:50:12.626120] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:54.248 [2024-07-26 08:50:12.626128] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:54.248 [2024-07-26 08:50:12.626136] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:54.248 [2024-07-26 08:50:12.626144] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting 
state to configure AER (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.626157] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.626177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:54.248 [2024-07-26 08:50:12.634069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:54.248 [2024-07-26 08:50:12.634097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.248 [2024-07-26 08:50:12.634112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.248 [2024-07-26 08:50:12.634124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.248 [2024-07-26 08:50:12.634137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.248 [2024-07-26 08:50:12.634146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.634161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.634177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:54.248 [2024-07-26 08:50:12.642070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 
p:1 m:0 dnr:0 00:16:54.248 [2024-07-26 08:50:12.642089] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:54.248 [2024-07-26 08:50:12.642099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.642115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.642130] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.642145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:54.248 [2024-07-26 08:50:12.650068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:54.248 [2024-07-26 08:50:12.650141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.650158] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.650172] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:54.248 [2024-07-26 08:50:12.650180] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:54.248 [2024-07-26 08:50:12.650186] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.248 [2024-07-26 08:50:12.650197] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:54.248 [2024-07-26 08:50:12.658075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:54.248 [2024-07-26 08:50:12.658099] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:54.248 [2024-07-26 08:50:12.658119] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.658134] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:54.248 [2024-07-26 08:50:12.658146] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:54.248 [2024-07-26 08:50:12.658155] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:54.248 [2024-07-26 08:50:12.658161] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.248 [2024-07-26 08:50:12.658171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:54.248 [2024-07-26 08:50:12.666082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:54.249 [2024-07-26 08:50:12.666112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:54.249 [2024-07-26 08:50:12.666129] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:16:54.249 [2024-07-26 08:50:12.666143] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:54.249 [2024-07-26 08:50:12.666151] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:54.249 [2024-07-26 08:50:12.666157] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.249 [2024-07-26 08:50:12.666167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:54.249 [2024-07-26 08:50:12.674081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:54.249 [2024-07-26 08:50:12.674104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:54.249 [2024-07-26 08:50:12.674121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:54.249 [2024-07-26 08:50:12.674136] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:54.249 [2024-07-26 08:50:12.674152] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:54.249 [2024-07-26 08:50:12.674162] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:54.249 [2024-07-26 08:50:12.674171] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:54.249 [2024-07-26 08:50:12.674180] 
nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:54.249 [2024-07-26 08:50:12.674188] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:54.249 [2024-07-26 08:50:12.674197] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:54.249 [2024-07-26 08:50:12.674224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:54.249 [2024-07-26 08:50:12.682082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:54.249 [2024-07-26 08:50:12.682111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:54.249 [2024-07-26 08:50:12.690072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:54.249 [2024-07-26 08:50:12.690098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:54.249 [2024-07-26 08:50:12.698069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:54.249 [2024-07-26 08:50:12.698094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:54.249 [2024-07-26 08:50:12.706073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:54.249 [2024-07-26 08:50:12.706104] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:54.249 
[2024-07-26 08:50:12.706116] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:54.249 [2024-07-26 08:50:12.706123] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:54.249 [2024-07-26 08:50:12.706129] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:54.249 [2024-07-26 08:50:12.706136] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:54.249 [2024-07-26 08:50:12.706146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:54.249 [2024-07-26 08:50:12.706158] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:54.249 [2024-07-26 08:50:12.706167] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:54.249 [2024-07-26 08:50:12.706173] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.249 [2024-07-26 08:50:12.706183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:54.249 [2024-07-26 08:50:12.706209] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:54.249 [2024-07-26 08:50:12.706221] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:54.249 [2024-07-26 08:50:12.706227] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.249 [2024-07-26 08:50:12.706237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:54.249 [2024-07-26 08:50:12.706249] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:54.249 [2024-07-26 08:50:12.706257] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:54.249 [2024-07-26 08:50:12.706263] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.249 [2024-07-26 08:50:12.706272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:54.508 [2024-07-26 08:50:12.714070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:54.508 [2024-07-26 08:50:12.714099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:54.508 [2024-07-26 08:50:12.714116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:54.508 [2024-07-26 08:50:12.714129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:54.508 ===================================================== 00:16:54.508 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:54.508 ===================================================== 00:16:54.508 Controller Capabilities/Features 00:16:54.508 ================================ 00:16:54.508 Vendor ID: 4e58 00:16:54.508 Subsystem Vendor ID: 4e58 00:16:54.508 Serial Number: SPDK2 00:16:54.508 Model Number: SPDK bdev Controller 00:16:54.508 Firmware Version: 24.09 00:16:54.508 Recommended Arb Burst: 6 00:16:54.508 IEEE OUI Identifier: 8d 6b 50 00:16:54.508 Multi-path I/O 00:16:54.508 May have multiple subsystem ports: Yes 00:16:54.508 May have multiple controllers: Yes 00:16:54.508 Associated with SR-IOV VF: No 
00:16:54.508 Max Data Transfer Size: 131072 00:16:54.508 Max Number of Namespaces: 32 00:16:54.508 Max Number of I/O Queues: 127 00:16:54.508 NVMe Specification Version (VS): 1.3 00:16:54.508 NVMe Specification Version (Identify): 1.3 00:16:54.508 Maximum Queue Entries: 256 00:16:54.508 Contiguous Queues Required: Yes 00:16:54.508 Arbitration Mechanisms Supported 00:16:54.508 Weighted Round Robin: Not Supported 00:16:54.508 Vendor Specific: Not Supported 00:16:54.508 Reset Timeout: 15000 ms 00:16:54.508 Doorbell Stride: 4 bytes 00:16:54.508 NVM Subsystem Reset: Not Supported 00:16:54.508 Command Sets Supported 00:16:54.508 NVM Command Set: Supported 00:16:54.508 Boot Partition: Not Supported 00:16:54.508 Memory Page Size Minimum: 4096 bytes 00:16:54.508 Memory Page Size Maximum: 4096 bytes 00:16:54.508 Persistent Memory Region: Not Supported 00:16:54.508 Optional Asynchronous Events Supported 00:16:54.508 Namespace Attribute Notices: Supported 00:16:54.508 Firmware Activation Notices: Not Supported 00:16:54.508 ANA Change Notices: Not Supported 00:16:54.508 PLE Aggregate Log Change Notices: Not Supported 00:16:54.508 LBA Status Info Alert Notices: Not Supported 00:16:54.508 EGE Aggregate Log Change Notices: Not Supported 00:16:54.508 Normal NVM Subsystem Shutdown event: Not Supported 00:16:54.508 Zone Descriptor Change Notices: Not Supported 00:16:54.508 Discovery Log Change Notices: Not Supported 00:16:54.508 Controller Attributes 00:16:54.508 128-bit Host Identifier: Supported 00:16:54.508 Non-Operational Permissive Mode: Not Supported 00:16:54.508 NVM Sets: Not Supported 00:16:54.508 Read Recovery Levels: Not Supported 00:16:54.508 Endurance Groups: Not Supported 00:16:54.508 Predictable Latency Mode: Not Supported 00:16:54.508 Traffic Based Keep Alive: Not Supported 00:16:54.508 Namespace Granularity: Not Supported 00:16:54.508 SQ Associations: Not Supported 00:16:54.508 UUID List: Not Supported 00:16:54.508 Multi-Domain Subsystem: Not Supported 00:16:54.508 
Fixed Capacity Management: Not Supported 00:16:54.508 Variable Capacity Management: Not Supported 00:16:54.508 Delete Endurance Group: Not Supported 00:16:54.508 Delete NVM Set: Not Supported 00:16:54.508 Extended LBA Formats Supported: Not Supported 00:16:54.508 Flexible Data Placement Supported: Not Supported 00:16:54.508 00:16:54.508 Controller Memory Buffer Support 00:16:54.508 ================================ 00:16:54.508 Supported: No 00:16:54.508 00:16:54.508 Persistent Memory Region Support 00:16:54.508 ================================ 00:16:54.508 Supported: No 00:16:54.508 00:16:54.508 Admin Command Set Attributes 00:16:54.508 ============================ 00:16:54.508 Security Send/Receive: Not Supported 00:16:54.508 Format NVM: Not Supported 00:16:54.508 Firmware Activate/Download: Not Supported 00:16:54.508 Namespace Management: Not Supported 00:16:54.508 Device Self-Test: Not Supported 00:16:54.508 Directives: Not Supported 00:16:54.508 NVMe-MI: Not Supported 00:16:54.508 Virtualization Management: Not Supported 00:16:54.508 Doorbell Buffer Config: Not Supported 00:16:54.509 Get LBA Status Capability: Not Supported 00:16:54.509 Command & Feature Lockdown Capability: Not Supported 00:16:54.509 Abort Command Limit: 4 00:16:54.509 Async Event Request Limit: 4 00:16:54.509 Number of Firmware Slots: N/A 00:16:54.509 Firmware Slot 1 Read-Only: N/A 00:16:54.509 Firmware Activation Without Reset: N/A 00:16:54.509 Multiple Update Detection Support: N/A 00:16:54.509 Firmware Update Granularity: No Information Provided 00:16:54.509 Per-Namespace SMART Log: No 00:16:54.509 Asymmetric Namespace Access Log Page: Not Supported 00:16:54.509 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:54.509 Command Effects Log Page: Supported 00:16:54.509 Get Log Page Extended Data: Supported 00:16:54.509 Telemetry Log Pages: Not Supported 00:16:54.509 Persistent Event Log Pages: Not Supported 00:16:54.509 Supported Log Pages Log Page: May Support 00:16:54.509 Commands Supported & 
Effects Log Page: Not Supported 00:16:54.509 Feature Identifiers & Effects Log Page: May Support 00:16:54.509 NVMe-MI Commands & Effects Log Page: May Support 00:16:54.509 Data Area 4 for Telemetry Log: Not Supported 00:16:54.509 Error Log Page Entries Supported: 128 00:16:54.509 Keep Alive: Supported 00:16:54.509 Keep Alive Granularity: 10000 ms 00:16:54.509 00:16:54.509 NVM Command Set Attributes 00:16:54.509 ========================== 00:16:54.509 Submission Queue Entry Size 00:16:54.509 Max: 64 00:16:54.509 Min: 64 00:16:54.509 Completion Queue Entry Size 00:16:54.509 Max: 16 00:16:54.509 Min: 16 00:16:54.509 Number of Namespaces: 32 00:16:54.509 Compare Command: Supported 00:16:54.509 Write Uncorrectable Command: Not Supported 00:16:54.509 Dataset Management Command: Supported 00:16:54.509 Write Zeroes Command: Supported 00:16:54.509 Set Features Save Field: Not Supported 00:16:54.509 Reservations: Not Supported 00:16:54.509 Timestamp: Not Supported 00:16:54.509 Copy: Supported 00:16:54.509 Volatile Write Cache: Present 00:16:54.509 Atomic Write Unit (Normal): 1 00:16:54.509 Atomic Write Unit (PFail): 1 00:16:54.509 Atomic Compare & Write Unit: 1 00:16:54.509 Fused Compare & Write: Supported 00:16:54.509 Scatter-Gather List 00:16:54.509 SGL Command Set: Supported (Dword aligned) 00:16:54.509 SGL Keyed: Not Supported 00:16:54.509 SGL Bit Bucket Descriptor: Not Supported 00:16:54.509 SGL Metadata Pointer: Not Supported 00:16:54.509 Oversized SGL: Not Supported 00:16:54.509 SGL Metadata Address: Not Supported 00:16:54.509 SGL Offset: Not Supported 00:16:54.509 Transport SGL Data Block: Not Supported 00:16:54.509 Replay Protected Memory Block: Not Supported 00:16:54.509 00:16:54.509 Firmware Slot Information 00:16:54.509 ========================= 00:16:54.509 Active slot: 1 00:16:54.509 Slot 1 Firmware Revision: 24.09 00:16:54.509 00:16:54.509 00:16:54.509 Commands Supported and Effects 00:16:54.509 ============================== 00:16:54.509 Admin Commands 
00:16:54.509 -------------- 00:16:54.509 Get Log Page (02h): Supported 00:16:54.509 Identify (06h): Supported 00:16:54.509 Abort (08h): Supported 00:16:54.509 Set Features (09h): Supported 00:16:54.509 Get Features (0Ah): Supported 00:16:54.509 Asynchronous Event Request (0Ch): Supported 00:16:54.509 Keep Alive (18h): Supported 00:16:54.509 I/O Commands 00:16:54.509 ------------ 00:16:54.509 Flush (00h): Supported LBA-Change 00:16:54.509 Write (01h): Supported LBA-Change 00:16:54.509 Read (02h): Supported 00:16:54.509 Compare (05h): Supported 00:16:54.509 Write Zeroes (08h): Supported LBA-Change 00:16:54.509 Dataset Management (09h): Supported LBA-Change 00:16:54.509 Copy (19h): Supported LBA-Change 00:16:54.509 00:16:54.509 Error Log 00:16:54.509 ========= 00:16:54.509 00:16:54.509 Arbitration 00:16:54.509 =========== 00:16:54.509 Arbitration Burst: 1 00:16:54.509 00:16:54.509 Power Management 00:16:54.509 ================ 00:16:54.509 Number of Power States: 1 00:16:54.509 Current Power State: Power State #0 00:16:54.509 Power State #0: 00:16:54.509 Max Power: 0.00 W 00:16:54.509 Non-Operational State: Operational 00:16:54.509 Entry Latency: Not Reported 00:16:54.509 Exit Latency: Not Reported 00:16:54.509 Relative Read Throughput: 0 00:16:54.509 Relative Read Latency: 0 00:16:54.509 Relative Write Throughput: 0 00:16:54.509 Relative Write Latency: 0 00:16:54.509 Idle Power: Not Reported 00:16:54.509 Active Power: Not Reported 00:16:54.509 Non-Operational Permissive Mode: Not Supported 00:16:54.509 00:16:54.509 Health Information 00:16:54.509 ================== 00:16:54.509 Critical Warnings: 00:16:54.509 Available Spare Space: OK 00:16:54.509 Temperature: OK 00:16:54.509 Device Reliability: OK 00:16:54.509 Read Only: No 00:16:54.509 Volatile Memory Backup: OK 00:16:54.509 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:54.509 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:54.509 Available Spare: 0% 00:16:54.509 Available Sp[2024-07-26 08:50:12.714251] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:54.509 [2024-07-26 08:50:12.722086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:54.509 [2024-07-26 08:50:12.722138] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:54.509 [2024-07-26 08:50:12.722156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.509 [2024-07-26 08:50:12.722168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.509 [2024-07-26 08:50:12.722178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.509 [2024-07-26 08:50:12.722188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.509 [2024-07-26 08:50:12.722269] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:54.509 [2024-07-26 08:50:12.722290] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:54.509 [2024-07-26 08:50:12.723267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:54.509 [2024-07-26 08:50:12.726095] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:54.509 [2024-07-26 08:50:12.726111] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:54.509 [2024-07-26 
08:50:12.726290] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:54.509 [2024-07-26 08:50:12.726313] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:54.509 [2024-07-26 08:50:12.726378] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:54.509 [2024-07-26 08:50:12.727547] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:54.509 are Threshold: 0% 00:16:54.509 Life Percentage Used: 0% 00:16:54.509 Data Units Read: 0 00:16:54.509 Data Units Written: 0 00:16:54.509 Host Read Commands: 0 00:16:54.509 Host Write Commands: 0 00:16:54.509 Controller Busy Time: 0 minutes 00:16:54.509 Power Cycles: 0 00:16:54.509 Power On Hours: 0 hours 00:16:54.509 Unsafe Shutdowns: 0 00:16:54.509 Unrecoverable Media Errors: 0 00:16:54.509 Lifetime Error Log Entries: 0 00:16:54.509 Warning Temperature Time: 0 minutes 00:16:54.509 Critical Temperature Time: 0 minutes 00:16:54.509 00:16:54.509 Number of Queues 00:16:54.509 ================ 00:16:54.509 Number of I/O Submission Queues: 127 00:16:54.509 Number of I/O Completion Queues: 127 00:16:54.509 00:16:54.509 Active Namespaces 00:16:54.509 ================= 00:16:54.509 Namespace ID:1 00:16:54.509 Error Recovery Timeout: Unlimited 00:16:54.509 Command Set Identifier: NVM (00h) 00:16:54.509 Deallocate: Supported 00:16:54.509 Deallocated/Unwritten Error: Not Supported 00:16:54.509 Deallocated Read Value: Unknown 00:16:54.509 Deallocate in Write Zeroes: Not Supported 00:16:54.509 Deallocated Guard Field: 0xFFFF 00:16:54.509 Flush: Supported 00:16:54.509 Reservation: Supported 00:16:54.509 Namespace Sharing Capabilities: Multiple Controllers 00:16:54.509 Size (in LBAs): 131072 (0GiB) 00:16:54.509 Capacity (in LBAs): 
131072 (0GiB) 00:16:54.509 Utilization (in LBAs): 131072 (0GiB) 00:16:54.509 NGUID: 7955259F71CF4479982D6B0024843228 00:16:54.509 UUID: 7955259f-71cf-4479-982d-6b0024843228 00:16:54.509 Thin Provisioning: Not Supported 00:16:54.509 Per-NS Atomic Units: Yes 00:16:54.509 Atomic Boundary Size (Normal): 0 00:16:54.509 Atomic Boundary Size (PFail): 0 00:16:54.509 Atomic Boundary Offset: 0 00:16:54.510 Maximum Single Source Range Length: 65535 00:16:54.510 Maximum Copy Length: 65535 00:16:54.510 Maximum Source Range Count: 1 00:16:54.510 NGUID/EUI64 Never Reused: No 00:16:54.510 Namespace Write Protected: No 00:16:54.510 Number of LBA Formats: 1 00:16:54.510 Current LBA Format: LBA Format #00 00:16:54.510 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:54.510 00:16:54.510 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:54.510 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.510 [2024-07-26 08:50:12.953928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:59.782 Initializing NVMe Controllers 00:16:59.782 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:59.782 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:59.782 Initialization complete. Launching workers. 
00:16:59.782 ======================================================== 00:16:59.782 Latency(us) 00:16:59.782 Device Information : IOPS MiB/s Average min max 00:16:59.782 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34286.02 133.93 3732.62 1183.15 7370.39 00:16:59.782 ======================================================== 00:16:59.782 Total : 34286.02 133.93 3732.62 1183.15 7370.39 00:16:59.782 00:16:59.782 [2024-07-26 08:50:18.060420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:59.782 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:59.782 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.041 [2024-07-26 08:50:18.292068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:05.314 Initializing NVMe Controllers 00:17:05.314 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:05.314 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:05.314 Initialization complete. Launching workers. 
00:17:05.314 ======================================================== 00:17:05.314 Latency(us) 00:17:05.314 Device Information : IOPS MiB/s Average min max 00:17:05.314 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31687.07 123.78 4038.94 1215.14 10365.23 00:17:05.314 ======================================================== 00:17:05.314 Total : 31687.07 123.78 4038.94 1215.14 10365.23 00:17:05.314 00:17:05.314 [2024-07-26 08:50:23.311546] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:05.314 08:50:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:05.314 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.314 [2024-07-26 08:50:23.527508] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:10.585 [2024-07-26 08:50:28.650226] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:10.585 Initializing NVMe Controllers 00:17:10.585 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:10.585 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:10.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:10.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:10.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:10.585 Initialization complete. Launching workers. 
00:17:10.585 Starting thread on core 2 00:17:10.585 Starting thread on core 3 00:17:10.585 Starting thread on core 1 00:17:10.585 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:10.585 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.585 [2024-07-26 08:50:28.948617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:13.869 [2024-07-26 08:50:32.026313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:13.869 Initializing NVMe Controllers 00:17:13.869 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:13.869 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:13.869 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:13.869 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:13.869 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:13.869 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:13.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:13.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:13.869 Initialization complete. Launching workers. 
00:17:13.869 Starting thread on core 1 with urgent priority queue 00:17:13.869 Starting thread on core 2 with urgent priority queue 00:17:13.869 Starting thread on core 3 with urgent priority queue 00:17:13.869 Starting thread on core 0 with urgent priority queue 00:17:13.870 SPDK bdev Controller (SPDK2 ) core 0: 5882.67 IO/s 17.00 secs/100000 ios 00:17:13.870 SPDK bdev Controller (SPDK2 ) core 1: 6085.33 IO/s 16.43 secs/100000 ios 00:17:13.870 SPDK bdev Controller (SPDK2 ) core 2: 5602.33 IO/s 17.85 secs/100000 ios 00:17:13.870 SPDK bdev Controller (SPDK2 ) core 3: 6462.67 IO/s 15.47 secs/100000 ios 00:17:13.870 ======================================================== 00:17:13.870 00:17:13.870 08:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:13.870 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.870 [2024-07-26 08:50:32.322563] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:14.129 Initializing NVMe Controllers 00:17:14.129 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:14.129 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:14.129 Namespace ID: 1 size: 0GB 00:17:14.129 Initialization complete. 00:17:14.129 INFO: using host memory buffer for IO 00:17:14.129 Hello world! 
00:17:14.129 [2024-07-26 08:50:32.331633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:14.129 08:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:14.129 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.389 [2024-07-26 08:50:32.615123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:15.324 Initializing NVMe Controllers 00:17:15.324 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:15.324 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:15.324 Initialization complete. Launching workers. 00:17:15.324 submit (in ns) avg, min, max = 7012.7, 3544.4, 4015457.8 00:17:15.324 complete (in ns) avg, min, max = 27280.5, 2081.1, 4014485.6 00:17:15.324 00:17:15.324 Submit histogram 00:17:15.324 ================ 00:17:15.324 Range in us Cumulative Count 00:17:15.324 3.532 - 3.556: 0.0981% ( 13) 00:17:15.324 3.556 - 3.579: 0.5507% ( 60) 00:17:15.324 3.579 - 3.603: 1.6598% ( 147) 00:17:15.324 3.603 - 3.627: 4.6775% ( 400) 00:17:15.324 3.627 - 3.650: 9.8378% ( 684) 00:17:15.324 3.650 - 3.674: 16.2052% ( 844) 00:17:15.324 3.674 - 3.698: 23.6137% ( 982) 00:17:15.324 3.698 - 3.721: 31.1354% ( 997) 00:17:15.324 3.721 - 3.745: 38.1667% ( 932) 00:17:15.324 3.745 - 3.769: 44.7982% ( 879) 00:17:15.324 3.769 - 3.793: 51.8748% ( 938) 00:17:15.324 3.793 - 3.816: 57.2765% ( 716) 00:17:15.324 3.816 - 3.840: 61.4334% ( 551) 00:17:15.324 3.840 - 3.864: 65.3414% ( 518) 00:17:15.324 3.864 - 3.887: 69.1739% ( 508) 00:17:15.324 3.887 - 3.911: 72.8404% ( 486) 00:17:15.324 3.911 - 3.935: 76.6730% ( 508) 00:17:15.324 3.935 - 3.959: 80.0453% ( 447) 00:17:15.324 3.959 - 3.982: 83.0102% ( 393) 00:17:15.324 3.982 - 4.006: 
85.8242% ( 373) 00:17:15.324 4.006 - 4.030: 88.0573% ( 296) 00:17:15.324 4.030 - 4.053: 89.9434% ( 250) 00:17:15.324 4.053 - 4.077: 91.4900% ( 205) 00:17:15.324 4.077 - 4.101: 92.5009% ( 134) 00:17:15.324 4.101 - 4.124: 93.4892% ( 131) 00:17:15.324 4.124 - 4.148: 94.1833% ( 92) 00:17:15.324 4.148 - 4.172: 94.9830% ( 106) 00:17:15.324 4.172 - 4.196: 95.5187% ( 71) 00:17:15.324 4.196 - 4.219: 95.9562% ( 58) 00:17:15.324 4.219 - 4.243: 96.2731% ( 42) 00:17:15.324 4.243 - 4.267: 96.5070% ( 31) 00:17:15.324 4.267 - 4.290: 96.6654% ( 21) 00:17:15.324 4.290 - 4.314: 96.7861% ( 16) 00:17:15.324 4.314 - 4.338: 96.9144% ( 17) 00:17:15.324 4.338 - 4.361: 97.0502% ( 18) 00:17:15.324 4.361 - 4.385: 97.1558% ( 14) 00:17:15.324 4.385 - 4.409: 97.2840% ( 17) 00:17:15.324 4.409 - 4.433: 97.3293% ( 6) 00:17:15.324 4.433 - 4.456: 97.3595% ( 4) 00:17:15.324 4.456 - 4.480: 97.3897% ( 4) 00:17:15.324 4.480 - 4.504: 97.4198% ( 4) 00:17:15.324 4.504 - 4.527: 97.4576% ( 5) 00:17:15.324 4.527 - 4.551: 97.4953% ( 5) 00:17:15.324 4.551 - 4.575: 97.5104% ( 2) 00:17:15.324 4.622 - 4.646: 97.5255% ( 2) 00:17:15.324 4.646 - 4.670: 97.5330% ( 1) 00:17:15.324 4.670 - 4.693: 97.5406% ( 1) 00:17:15.324 4.717 - 4.741: 97.5481% ( 1) 00:17:15.324 4.764 - 4.788: 97.5556% ( 1) 00:17:15.324 4.788 - 4.812: 97.5707% ( 2) 00:17:15.324 4.812 - 4.836: 97.5783% ( 1) 00:17:15.324 4.836 - 4.859: 97.6160% ( 5) 00:17:15.324 4.859 - 4.883: 97.6386% ( 3) 00:17:15.324 4.883 - 4.907: 97.6613% ( 3) 00:17:15.324 4.907 - 4.930: 97.6914% ( 4) 00:17:15.324 4.930 - 4.954: 97.7442% ( 7) 00:17:15.324 4.954 - 4.978: 97.7820% ( 5) 00:17:15.324 4.978 - 5.001: 97.8197% ( 5) 00:17:15.324 5.001 - 5.025: 97.8574% ( 5) 00:17:15.324 5.025 - 5.049: 97.8951% ( 5) 00:17:15.324 5.049 - 5.073: 97.9178% ( 3) 00:17:15.324 5.073 - 5.096: 97.9630% ( 6) 00:17:15.324 5.096 - 5.120: 98.0234% ( 8) 00:17:15.324 5.120 - 5.144: 98.0611% ( 5) 00:17:15.324 5.144 - 5.167: 98.1139% ( 7) 00:17:15.325 5.167 - 5.191: 98.1366% ( 3) 00:17:15.325 5.191 - 5.215: 
98.1592% ( 3) 00:17:15.325 5.215 - 5.239: 98.2120% ( 7) 00:17:15.325 5.239 - 5.262: 98.2195% ( 1) 00:17:15.325 5.262 - 5.286: 98.2497% ( 4) 00:17:15.325 5.286 - 5.310: 98.2648% ( 2) 00:17:15.325 5.310 - 5.333: 98.2874% ( 3) 00:17:15.325 5.333 - 5.357: 98.2950% ( 1) 00:17:15.325 5.357 - 5.381: 98.3176% ( 3) 00:17:15.325 5.381 - 5.404: 98.3252% ( 1) 00:17:15.325 5.404 - 5.428: 98.3327% ( 1) 00:17:15.325 5.476 - 5.499: 98.3402% ( 1) 00:17:15.325 5.523 - 5.547: 98.3478% ( 1) 00:17:15.325 5.570 - 5.594: 98.3704% ( 3) 00:17:15.325 5.713 - 5.736: 98.3780% ( 1) 00:17:15.325 5.760 - 5.784: 98.3931% ( 2) 00:17:15.325 5.807 - 5.831: 98.4006% ( 1) 00:17:15.325 5.831 - 5.855: 98.4081% ( 1) 00:17:15.325 5.950 - 5.973: 98.4232% ( 2) 00:17:15.325 5.973 - 5.997: 98.4383% ( 2) 00:17:15.325 6.068 - 6.116: 98.4534% ( 2) 00:17:15.325 6.163 - 6.210: 98.4610% ( 1) 00:17:15.325 6.495 - 6.542: 98.4685% ( 1) 00:17:15.325 6.637 - 6.684: 98.4760% ( 1) 00:17:15.325 6.874 - 6.921: 98.4911% ( 2) 00:17:15.325 6.969 - 7.016: 98.4987% ( 1) 00:17:15.325 7.016 - 7.064: 98.5062% ( 1) 00:17:15.325 7.206 - 7.253: 98.5138% ( 1) 00:17:15.325 7.253 - 7.301: 98.5213% ( 1) 00:17:15.325 7.301 - 7.348: 98.5364% ( 2) 00:17:15.325 7.348 - 7.396: 98.5439% ( 1) 00:17:15.325 7.443 - 7.490: 98.5590% ( 2) 00:17:15.325 7.490 - 7.538: 98.5666% ( 1) 00:17:15.325 7.538 - 7.585: 98.5817% ( 2) 00:17:15.325 7.680 - 7.727: 98.5968% ( 2) 00:17:15.325 7.775 - 7.822: 98.6194% ( 3) 00:17:15.325 7.822 - 7.870: 98.6269% ( 1) 00:17:15.325 7.870 - 7.917: 98.6571% ( 4) 00:17:15.325 7.964 - 8.012: 98.6647% ( 1) 00:17:15.325 8.059 - 8.107: 98.6797% ( 2) 00:17:15.325 8.107 - 8.154: 98.6873% ( 1) 00:17:15.325 8.154 - 8.201: 98.6948% ( 1) 00:17:15.325 8.201 - 8.249: 98.7175% ( 3) 00:17:15.325 8.249 - 8.296: 98.7326% ( 2) 00:17:15.325 8.344 - 8.391: 98.7401% ( 1) 00:17:15.325 8.391 - 8.439: 98.7476% ( 1) 00:17:15.325 8.533 - 8.581: 98.7552% ( 1) 00:17:15.325 8.628 - 8.676: 98.7627% ( 1) 00:17:15.325 8.770 - 8.818: 98.7778% ( 2) 
00:17:15.325 8.865 - 8.913: 98.7854% ( 1) 00:17:15.325 8.913 - 8.960: 98.8005% ( 2) 00:17:15.325 9.055 - 9.102: 98.8080% ( 1) 00:17:15.325 9.244 - 9.292: 98.8155% ( 1) 00:17:15.325 9.292 - 9.339: 98.8231% ( 1) 00:17:15.325 9.339 - 9.387: 98.8306% ( 1) 00:17:15.325 9.434 - 9.481: 98.8382% ( 1) 00:17:15.325 10.335 - 10.382: 98.8457% ( 1) 00:17:15.325 11.141 - 11.188: 98.8533% ( 1) 00:17:15.325 11.188 - 11.236: 98.8684% ( 2) 00:17:15.325 11.378 - 11.425: 98.8834% ( 2) 00:17:15.325 11.615 - 11.662: 98.8910% ( 1) 00:17:15.325 11.899 - 11.947: 98.8985% ( 1) 00:17:15.325 11.994 - 12.041: 98.9061% ( 1) 00:17:15.325 12.136 - 12.231: 98.9136% ( 1) 00:17:15.325 12.231 - 12.326: 98.9212% ( 1) 00:17:15.325 12.516 - 12.610: 98.9287% ( 1) 00:17:15.325 12.895 - 12.990: 98.9363% ( 1) 00:17:15.325 13.559 - 13.653: 98.9438% ( 1) 00:17:15.325 13.748 - 13.843: 98.9513% ( 1) 00:17:15.325 14.696 - 14.791: 98.9589% ( 1) 00:17:15.325 15.076 - 15.170: 98.9664% ( 1) 00:17:15.325 15.170 - 15.265: 98.9740% ( 1) 00:17:15.325 16.972 - 17.067: 98.9815% ( 1) 00:17:15.325 17.161 - 17.256: 98.9891% ( 1) 00:17:15.325 17.256 - 17.351: 99.0041% ( 2) 00:17:15.325 17.351 - 17.446: 99.0343% ( 4) 00:17:15.325 17.446 - 17.541: 99.0570% ( 3) 00:17:15.325 17.541 - 17.636: 99.1022% ( 6) 00:17:15.325 17.636 - 17.730: 99.1475% ( 6) 00:17:15.325 17.730 - 17.825: 99.1777% ( 4) 00:17:15.325 17.825 - 17.920: 99.2229% ( 6) 00:17:15.325 17.920 - 18.015: 99.2682% ( 6) 00:17:15.325 18.015 - 18.110: 99.3210% ( 7) 00:17:15.325 18.110 - 18.204: 99.4417% ( 16) 00:17:15.325 18.204 - 18.299: 99.5398% ( 13) 00:17:15.325 18.299 - 18.394: 99.5851% ( 6) 00:17:15.325 18.394 - 18.489: 99.6379% ( 7) 00:17:15.325 18.489 - 18.584: 99.7058% ( 9) 00:17:15.325 18.584 - 18.679: 99.7359% ( 4) 00:17:15.325 18.679 - 18.773: 99.7963% ( 8) 00:17:15.325 18.773 - 18.868: 99.8038% ( 1) 00:17:15.325 18.868 - 18.963: 99.8265% ( 3) 00:17:15.325 18.963 - 19.058: 99.8567% ( 4) 00:17:15.325 19.342 - 19.437: 99.8717% ( 2) 00:17:15.325 19.437 - 19.532: 
99.8793% ( 1) 00:17:15.325 19.532 - 19.627: 99.8868% ( 1) 00:17:15.325 19.721 - 19.816: 99.8944% ( 1) 00:17:15.325 19.816 - 19.911: 99.9019% ( 1) 00:17:15.325 21.523 - 21.618: 99.9095% ( 1) 00:17:15.325 22.092 - 22.187: 99.9170% ( 1) 00:17:15.325 24.652 - 24.841: 99.9246% ( 1) 00:17:15.325 3980.705 - 4004.978: 99.9849% ( 8) 00:17:15.325 4004.978 - 4029.250: 100.0000% ( 2) 00:17:15.325 00:17:15.325 Complete histogram 00:17:15.325 ================== 00:17:15.325 Range in us Cumulative Count 00:17:15.325 2.074 - 2.086: 0.3848% ( 51) 00:17:15.325 2.086 - 2.098: 21.7955% ( 2838) 00:17:15.325 2.098 - 2.110: 42.7763% ( 2781) 00:17:15.325 2.110 - 2.121: 45.1905% ( 320) 00:17:15.325 2.121 - 2.133: 56.5447% ( 1505) 00:17:15.325 2.133 - 2.145: 62.1124% ( 738) 00:17:15.325 2.145 - 2.157: 64.2173% ( 279) 00:17:15.325 2.157 - 2.169: 73.0743% ( 1174) 00:17:15.325 2.169 - 2.181: 75.9713% ( 384) 00:17:15.325 2.181 - 2.193: 77.4123% ( 191) 00:17:15.325 2.193 - 2.204: 81.9163% ( 597) 00:17:15.325 2.204 - 2.216: 83.4855% ( 208) 00:17:15.325 2.216 - 2.228: 84.2625% ( 103) 00:17:15.325 2.228 - 2.240: 87.4764% ( 426) 00:17:15.325 2.240 - 2.252: 90.6526% ( 421) 00:17:15.325 2.252 - 2.264: 91.7390% ( 144) 00:17:15.325 2.264 - 2.276: 93.1573% ( 188) 00:17:15.325 2.276 - 2.287: 94.1154% ( 127) 00:17:15.325 2.287 - 2.299: 94.3946% ( 37) 00:17:15.325 2.299 - 2.311: 94.7114% ( 42) 00:17:15.325 2.311 - 2.323: 95.3527% ( 85) 00:17:15.325 2.323 - 2.335: 95.6394% ( 38) 00:17:15.325 2.335 - 2.347: 95.6922% ( 7) 00:17:15.325 2.347 - 2.359: 95.7450% ( 7) 00:17:15.325 2.359 - 2.370: 95.8054% ( 8) 00:17:15.325 2.370 - 2.382: 95.9487% ( 19) 00:17:15.325 2.382 - 2.394: 96.2580% ( 41) 00:17:15.325 2.394 - 2.406: 96.7409% ( 64) 00:17:15.325 2.406 - 2.418: 97.0351% ( 39) 00:17:15.325 2.418 - 2.430: 97.2916% ( 34) 00:17:15.325 2.430 - 2.441: 97.4425% ( 20) 00:17:15.325 2.441 - 2.453: 97.6160% ( 23) 00:17:15.325 2.453 - 2.465: 97.7367% ( 16) 00:17:15.325 2.465 - 2.477: 97.9102% ( 23) 00:17:15.325 2.477 - 
2.489: 98.0536% ( 19) 00:17:15.325 2.489 - 2.501: 98.2195% ( 22) 00:17:15.325 2.501 - 2.513: 98.2724% ( 7) 00:17:15.325 2.513 - 2.524: 98.3252% ( 7) 00:17:15.325 2.524 - 2.536: 98.3780% ( 7) 00:17:15.325 2.536 - 2.548: 98.4081% ( 4) 00:17:15.325 2.548 - 2.560: 98.4157% ( 1) 00:17:15.325 2.560 - 2.572: 98.4232% ( 1) 00:17:15.325 2.607 - 2.619: 98.4308% ( 1) 00:17:15.325 2.631 - 2.643: 98.4459% ( 2) 00:17:15.325 2.643 - 2.655: 98.4610% ( 2) 00:17:15.325 2.655 - 2.667: 98.4685% ( 1) 00:17:15.325 2.667 - 2.679: 98.4760% ( 1) 00:17:15.325 2.714 - 2.726: 98.4836% ( 1) 00:17:15.325 2.726 - 2.738: 98.4911% ( 1) 00:17:15.325 3.224 - 3.247: 98.4987% ( 1) 00:17:15.325 3.247 - 3.271: 98.5062% ( 1) 00:17:15.325 3.295 - 3.319: 98.5289% ( 3) 00:17:15.325 3.319 - 3.342: 98.5439% ( 2) 00:17:15.325 3.342 - 3.366: 98.5515% ( 1) 00:17:15.325 3.366 - 3.390: 98.5590% ( 1) 00:17:15.325 3.390 - 3.413: 98.5666% ( 1) 00:17:15.325 3.413 - 3.437: 98.5741% ( 1) 00:17:15.325 3.437 - 3.461: 98.5817% ( 1) 00:17:15.325 3.484 - 3.508: 98.5892% ( 1) 00:17:15.325 3.508 - 3.532: 98.6043% ( 2) 00:17:15.325 3.579 - 3.603: 98.6269% ( 3) 00:17:15.325 3.603 - 3.627: 98.6345% ( 1) 00:17:15.325 3.627 - 3.650: 98.6496% ( 2) 00:17:15.325 3.674 - 3.698: 98.6571% ( 1) 00:17:15.325 3.769 - 3.793: 98.6647% ( 1) 00:17:15.325 3.816 - 3.840: 98.6797% ( 2) 00:17:15.325 3.911 - 3.935: 98.6873% ( 1) 00:17:15.325 3.982 - 4.006: 98.6948% ( 1) 00:17:15.325 5.239 - 5.262: 98.7024% ( 1) 00:17:15.325 5.262 - 5.286: 98.7099% ( 1) 00:17:15.325 5.428 - 5.452: 98.7175% ( 1) 00:17:15.325 5.523 - 5.547: 98.7250% ( 1) 00:17:15.325 5.784 - 5.807: 98.7326% ( 1) 00:17:15.325 6.044 - 6.068: 98.7401% ( 1) 00:17:15.325 6.068 - 6.116: 98.7476% ( 1) 00:17:15.325 6.116 - 6.163: 98.7552% ( 1) 00:17:15.325 6.163 - 6.210: 98.7703% ( 2) 00:17:15.325 6.353 - 6.400: 98.7778% ( 1) 00:17:15.325 6.400 - 6.447: 98.7854% ( 1) 00:17:15.326 6.542 - 6.590: 98.7929% ( 1) 00:17:15.326 6.684 - 6.732: 98.8005% ( 1) 00:17:15.326 7.396 - 7.443: 98.8080% ( 1) 
00:17:15.326 7.585 - 7.633: 98.8155% ( 1) 00:17:15.326 7.727 - 7.775: 9[2024-07-26 08:50:33.711086] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:15.326 8.8231% ( 1) 00:17:15.326 8.628 - 8.676: 98.8306% ( 1) 00:17:15.326 15.455 - 15.550: 98.8382% ( 1) 00:17:15.326 15.550 - 15.644: 98.8457% ( 1) 00:17:15.326 15.644 - 15.739: 98.8684% ( 3) 00:17:15.326 15.739 - 15.834: 98.8759% ( 1) 00:17:15.326 15.834 - 15.929: 98.9061% ( 4) 00:17:15.326 15.929 - 16.024: 98.9287% ( 3) 00:17:15.326 16.024 - 16.119: 98.9438% ( 2) 00:17:15.326 16.119 - 16.213: 98.9513% ( 1) 00:17:15.326 16.213 - 16.308: 98.9740% ( 3) 00:17:15.326 16.308 - 16.403: 98.9891% ( 2) 00:17:15.326 16.403 - 16.498: 99.0117% ( 3) 00:17:15.326 16.498 - 16.593: 99.0796% ( 9) 00:17:15.326 16.593 - 16.687: 99.1475% ( 9) 00:17:15.326 16.687 - 16.782: 99.1777% ( 4) 00:17:15.326 16.782 - 16.877: 99.2078% ( 4) 00:17:15.326 16.877 - 16.972: 99.2305% ( 3) 00:17:15.326 16.972 - 17.067: 99.2757% ( 6) 00:17:15.326 17.067 - 17.161: 99.2984% ( 3) 00:17:15.326 17.161 - 17.256: 99.3210% ( 3) 00:17:15.326 17.256 - 17.351: 99.3286% ( 1) 00:17:15.326 17.351 - 17.446: 99.3361% ( 1) 00:17:15.326 17.446 - 17.541: 99.3512% ( 2) 00:17:15.326 17.636 - 17.730: 99.3587% ( 1) 00:17:15.326 18.204 - 18.299: 99.3663% ( 1) 00:17:15.326 19.153 - 19.247: 99.3738% ( 1) 00:17:15.326 3980.705 - 4004.978: 99.9019% ( 70) 00:17:15.326 4004.978 - 4029.250: 100.0000% ( 13) 00:17:15.326 00:17:15.326 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:15.326 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:15.326 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:15.326 08:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:15.326 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:15.583 [ 00:17:15.583 { 00:17:15.583 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:15.583 "subtype": "Discovery", 00:17:15.583 "listen_addresses": [], 00:17:15.583 "allow_any_host": true, 00:17:15.583 "hosts": [] 00:17:15.583 }, 00:17:15.583 { 00:17:15.583 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:15.583 "subtype": "NVMe", 00:17:15.583 "listen_addresses": [ 00:17:15.583 { 00:17:15.583 "trtype": "VFIOUSER", 00:17:15.583 "adrfam": "IPv4", 00:17:15.583 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:15.583 "trsvcid": "0" 00:17:15.583 } 00:17:15.583 ], 00:17:15.583 "allow_any_host": true, 00:17:15.584 "hosts": [], 00:17:15.584 "serial_number": "SPDK1", 00:17:15.584 "model_number": "SPDK bdev Controller", 00:17:15.584 "max_namespaces": 32, 00:17:15.584 "min_cntlid": 1, 00:17:15.584 "max_cntlid": 65519, 00:17:15.584 "namespaces": [ 00:17:15.584 { 00:17:15.584 "nsid": 1, 00:17:15.584 "bdev_name": "Malloc1", 00:17:15.584 "name": "Malloc1", 00:17:15.584 "nguid": "0E98537DD2ED4895A5A2DBDB4469476D", 00:17:15.584 "uuid": "0e98537d-d2ed-4895-a5a2-dbdb4469476d" 00:17:15.584 }, 00:17:15.584 { 00:17:15.584 "nsid": 2, 00:17:15.584 "bdev_name": "Malloc3", 00:17:15.584 "name": "Malloc3", 00:17:15.584 "nguid": "A0E58CD13FCA4AED97DD4620DC5244D9", 00:17:15.584 "uuid": "a0e58cd1-3fca-4aed-97dd-4620dc5244d9" 00:17:15.584 } 00:17:15.584 ] 00:17:15.584 }, 00:17:15.584 { 00:17:15.584 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:15.584 "subtype": "NVMe", 00:17:15.584 "listen_addresses": [ 00:17:15.584 { 00:17:15.584 "trtype": "VFIOUSER", 00:17:15.584 "adrfam": "IPv4", 00:17:15.584 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:15.584 "trsvcid": "0" 00:17:15.584 } 00:17:15.584 
], 00:17:15.584 "allow_any_host": true, 00:17:15.584 "hosts": [], 00:17:15.584 "serial_number": "SPDK2", 00:17:15.584 "model_number": "SPDK bdev Controller", 00:17:15.584 "max_namespaces": 32, 00:17:15.584 "min_cntlid": 1, 00:17:15.584 "max_cntlid": 65519, 00:17:15.584 "namespaces": [ 00:17:15.584 { 00:17:15.584 "nsid": 1, 00:17:15.584 "bdev_name": "Malloc2", 00:17:15.584 "name": "Malloc2", 00:17:15.584 "nguid": "7955259F71CF4479982D6B0024843228", 00:17:15.584 "uuid": "7955259f-71cf-4479-982d-6b0024843228" 00:17:15.584 } 00:17:15.584 ] 00:17:15.584 } 00:17:15.584 ] 00:17:15.584 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:15.584 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=962860 00:17:15.584 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:15.584 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:15.584 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:15.584 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:15.584 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:15.584 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:15.584 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:15.584 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:15.842 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.842 [2024-07-26 08:50:34.166610] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:15.842 Malloc4 00:17:15.842 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:16.100 [2024-07-26 08:50:34.536352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:16.100 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:16.358 Asynchronous Event Request test 00:17:16.358 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:16.358 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:16.358 Registering asynchronous event callbacks... 00:17:16.358 Starting namespace attribute notice tests for all controllers... 00:17:16.358 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:16.358 aer_cb - Changed Namespace 00:17:16.358 Cleaning up... 
00:17:16.358 [ 00:17:16.358 { 00:17:16.358 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:16.358 "subtype": "Discovery", 00:17:16.358 "listen_addresses": [], 00:17:16.358 "allow_any_host": true, 00:17:16.358 "hosts": [] 00:17:16.358 }, 00:17:16.358 { 00:17:16.358 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:16.358 "subtype": "NVMe", 00:17:16.358 "listen_addresses": [ 00:17:16.358 { 00:17:16.358 "trtype": "VFIOUSER", 00:17:16.358 "adrfam": "IPv4", 00:17:16.358 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:16.358 "trsvcid": "0" 00:17:16.358 } 00:17:16.358 ], 00:17:16.358 "allow_any_host": true, 00:17:16.358 "hosts": [], 00:17:16.358 "serial_number": "SPDK1", 00:17:16.358 "model_number": "SPDK bdev Controller", 00:17:16.358 "max_namespaces": 32, 00:17:16.358 "min_cntlid": 1, 00:17:16.358 "max_cntlid": 65519, 00:17:16.358 "namespaces": [ 00:17:16.358 { 00:17:16.358 "nsid": 1, 00:17:16.358 "bdev_name": "Malloc1", 00:17:16.358 "name": "Malloc1", 00:17:16.358 "nguid": "0E98537DD2ED4895A5A2DBDB4469476D", 00:17:16.358 "uuid": "0e98537d-d2ed-4895-a5a2-dbdb4469476d" 00:17:16.358 }, 00:17:16.358 { 00:17:16.358 "nsid": 2, 00:17:16.358 "bdev_name": "Malloc3", 00:17:16.358 "name": "Malloc3", 00:17:16.358 "nguid": "A0E58CD13FCA4AED97DD4620DC5244D9", 00:17:16.358 "uuid": "a0e58cd1-3fca-4aed-97dd-4620dc5244d9" 00:17:16.358 } 00:17:16.358 ] 00:17:16.358 }, 00:17:16.358 { 00:17:16.358 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:16.358 "subtype": "NVMe", 00:17:16.358 "listen_addresses": [ 00:17:16.358 { 00:17:16.358 "trtype": "VFIOUSER", 00:17:16.358 "adrfam": "IPv4", 00:17:16.359 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:16.359 "trsvcid": "0" 00:17:16.359 } 00:17:16.359 ], 00:17:16.359 "allow_any_host": true, 00:17:16.359 "hosts": [], 00:17:16.359 "serial_number": "SPDK2", 00:17:16.359 "model_number": "SPDK bdev Controller", 00:17:16.359 "max_namespaces": 32, 00:17:16.359 "min_cntlid": 1, 00:17:16.359 "max_cntlid": 65519, 00:17:16.359 "namespaces": [ 
00:17:16.359 { 00:17:16.359 "nsid": 1, 00:17:16.359 "bdev_name": "Malloc2", 00:17:16.359 "name": "Malloc2", 00:17:16.359 "nguid": "7955259F71CF4479982D6B0024843228", 00:17:16.359 "uuid": "7955259f-71cf-4479-982d-6b0024843228" 00:17:16.359 }, 00:17:16.359 { 00:17:16.359 "nsid": 2, 00:17:16.359 "bdev_name": "Malloc4", 00:17:16.359 "name": "Malloc4", 00:17:16.359 "nguid": "D7878602BE9644BDA0D4732E7DED3C8B", 00:17:16.359 "uuid": "d7878602-be96-44bd-a0d4-732e7ded3c8b" 00:17:16.359 } 00:17:16.359 ] 00:17:16.359 } 00:17:16.359 ] 00:17:16.359 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 962860 00:17:16.359 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:16.359 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 957276 00:17:16.359 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 957276 ']' 00:17:16.359 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 957276 00:17:16.359 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:16.359 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.359 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 957276 00:17:16.616 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:16.616 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:16.616 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 957276' 00:17:16.616 killing process with pid 957276 00:17:16.616 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 957276 00:17:16.617 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 957276 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=963001 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 963001' 00:17:16.875 Process pid: 963001 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 963001 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 963001 ']' 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.875 08:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.875 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:16.875 [2024-07-26 08:50:35.192203] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:16.875 [2024-07-26 08:50:35.193219] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:17:16.875 [2024-07-26 08:50:35.193273] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.875 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.875 [2024-07-26 08:50:35.227841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:16.875 [2024-07-26 08:50:35.254598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.135 [2024-07-26 08:50:35.340530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.135 [2024-07-26 08:50:35.340581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.135 [2024-07-26 08:50:35.340610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.135 [2024-07-26 08:50:35.340621] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:17.135 [2024-07-26 08:50:35.340630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.135 [2024-07-26 08:50:35.340686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.135 [2024-07-26 08:50:35.340710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.135 [2024-07-26 08:50:35.340768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:17.135 [2024-07-26 08:50:35.340771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.135 [2024-07-26 08:50:35.432749] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:17.135 [2024-07-26 08:50:35.432956] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:17.135 [2024-07-26 08:50:35.433264] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:17.135 [2024-07-26 08:50:35.433809] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:17.135 [2024-07-26 08:50:35.434052] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
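The interrupt-mode bring-up above (nvmf_tgt launched with `--interrupt-mode`, then a VFIOUSER transport created with the `-M -I` flags) can be sketched as a small dry-run helper. `SPDK_DIR` is an assumed placeholder for the SPDK checkout, and `RUN` defaults to `echo` so the commands are printed rather than executed:

```shell
# Dry-run sketch of the interrupt-mode bring-up seen in the log above.
# SPDK_DIR is an assumed placeholder; RUN defaults to 'echo' so nothing
# is actually launched when run outside a real SPDK environment.
SPDK_DIR="${SPDK_DIR:-/path/to/spdk}"
RUN="${RUN:-echo}"

start_interrupt_mode_target() {
    # -i: shm id, -e: tracepoint group mask, -m: core list, as in the log
    $RUN "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
    # -M -I: transport-specific flags passed through from the test script
    # (their exact semantics are not shown in this log)
    $RUN "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I
}

start_interrupt_mode_target
```

Note that in the real run the RPC call only happens after `waitforlisten` confirms the target is up on /var/tmp/spdk.sock, which this dry-run sketch omits.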
00:17:17.135 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:17.135 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:17.135 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:18.076 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:18.334 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:18.335 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:18.335 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:18.335 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:18.335 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:18.593 Malloc1 00:17:18.593 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:18.852 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:19.145 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:19.403 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:19.403 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:19.403 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:19.661 Malloc2 00:17:19.661 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:19.919 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:20.177 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 963001 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 963001 ']' 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 963001 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.437 08:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 963001 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 963001' 00:17:20.437 killing process with pid 963001 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 963001 00:17:20.437 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 963001 00:17:20.697 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:20.697 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:20.697 00:17:20.697 real 0m52.615s 00:17:20.697 user 3m27.757s 00:17:20.697 sys 0m4.262s 00:17:20.697 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.697 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:20.697 ************************************ 00:17:20.697 END TEST nvmf_vfio_user 00:17:20.697 ************************************ 00:17:20.697 08:50:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:20.697 08:50:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:20.697 08:50:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:20.697 08:50:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.697 ************************************ 00:17:20.697 START TEST nvmf_vfio_user_nvme_compliance 00:17:20.697 ************************************ 00:17:20.697 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:20.956 * Looking for test storage... 00:17:20.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.956 08:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.956 08:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=963590 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 963590' 00:17:20.956 Process pid: 963590 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 963590 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 963590 ']' 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.956 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:20.957 08:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.957 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:20.957 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:20.957 [2024-07-26 08:50:39.253254] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:17:20.957 [2024-07-26 08:50:39.253344] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.957 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.957 [2024-07-26 08:50:39.285202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:20.957 [2024-07-26 08:50:39.311973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:20.957 [2024-07-26 08:50:39.396207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.957 [2024-07-26 08:50:39.396266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.957 [2024-07-26 08:50:39.396291] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.957 [2024-07-26 08:50:39.396305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.957 [2024-07-26 08:50:39.396317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:20.957 [2024-07-26 08:50:39.396405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.957 [2024-07-26 08:50:39.396457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.957 [2024-07-26 08:50:39.396461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.215 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:21.215 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:21.215 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.150 08:50:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.150 malloc0 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:22.150 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:22.425 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.425 00:17:22.425 00:17:22.425 CUnit - A unit testing framework for C - Version 2.1-3 00:17:22.425 http://cunit.sourceforge.net/ 00:17:22.425 00:17:22.425 00:17:22.425 Suite: nvme_compliance 00:17:22.425 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 08:50:40.753678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.425 [2024-07-26 08:50:40.755182] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:22.425 [2024-07-26 08:50:40.755225] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:22.425 [2024-07-26 08:50:40.755239] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:22.425 [2024-07-26 08:50:40.756703] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.425 passed 00:17:22.425 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 08:50:40.845321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.425 [2024-07-26 08:50:40.848340] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.425 passed 00:17:22.682 Test: admin_identify_ns ...[2024-07-26 08:50:40.935927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.682 [2024-07-26 08:50:41.001079] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:22.682 [2024-07-26 08:50:41.009103] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:22.682 [2024-07-26 
08:50:41.030189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.682 passed 00:17:22.682 Test: admin_get_features_mandatory_features ...[2024-07-26 08:50:41.114540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.682 [2024-07-26 08:50:41.119572] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.939 passed 00:17:22.939 Test: admin_get_features_optional_features ...[2024-07-26 08:50:41.204148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.939 [2024-07-26 08:50:41.207164] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.939 passed 00:17:22.939 Test: admin_set_features_number_of_queues ...[2024-07-26 08:50:41.291300] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.196 [2024-07-26 08:50:41.400191] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.196 passed 00:17:23.196 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 08:50:41.485013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.196 [2024-07-26 08:50:41.488036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.196 passed 00:17:23.196 Test: admin_get_log_page_with_lpo ...[2024-07-26 08:50:41.574279] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.196 [2024-07-26 08:50:41.642074] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:23.196 [2024-07-26 08:50:41.655173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.454 passed 00:17:23.454 Test: fabric_property_get ...[2024-07-26 08:50:41.739734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.454 [2024-07-26 08:50:41.741001] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:23.454 [2024-07-26 08:50:41.742755] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.454 passed 00:17:23.454 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 08:50:41.829325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.454 [2024-07-26 08:50:41.830628] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:23.454 [2024-07-26 08:50:41.832348] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.454 passed 00:17:23.714 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 08:50:41.916534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.714 [2024-07-26 08:50:42.003081] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:23.714 [2024-07-26 08:50:42.019072] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:23.714 [2024-07-26 08:50:42.024161] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.714 passed 00:17:23.714 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 08:50:42.109023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.714 [2024-07-26 08:50:42.110342] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:23.714 [2024-07-26 08:50:42.112065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.714 passed 00:17:23.973 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 08:50:42.199556] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.973 [2024-07-26 08:50:42.275073] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:17:23.973 [2024-07-26 08:50:42.299071] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:23.973 [2024-07-26 08:50:42.304176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.973 passed 00:17:23.973 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 08:50:42.388911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.973 [2024-07-26 08:50:42.390213] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:23.973 [2024-07-26 08:50:42.390253] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:23.973 [2024-07-26 08:50:42.391932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.973 passed 00:17:24.233 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 08:50:42.476592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:24.233 [2024-07-26 08:50:42.569070] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:24.233 [2024-07-26 08:50:42.577068] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:24.233 [2024-07-26 08:50:42.585071] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:24.233 [2024-07-26 08:50:42.593083] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:24.233 [2024-07-26 08:50:42.622190] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:24.233 passed 00:17:24.492 Test: admin_create_io_sq_verify_pc ...[2024-07-26 08:50:42.704533] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:24.492 [2024-07-26 08:50:42.720107] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:24.492 
[2024-07-26 08:50:42.737948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:24.492 passed 00:17:24.492 Test: admin_create_io_qp_max_qps ...[2024-07-26 08:50:42.827604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.870 [2024-07-26 08:50:43.943075] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:25.870 [2024-07-26 08:50:44.319243] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:26.128 passed 00:17:26.128 Test: admin_create_io_sq_shared_cq ...[2024-07-26 08:50:44.405571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:26.128 [2024-07-26 08:50:44.541068] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:26.128 [2024-07-26 08:50:44.578157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:26.385 passed 00:17:26.385 00:17:26.385 Run Summary: Type Total Ran Passed Failed Inactive 00:17:26.385 suites 1 1 n/a 0 0 00:17:26.385 tests 18 18 18 0 0 00:17:26.385 asserts 360 360 360 0 n/a 00:17:26.385 00:17:26.385 Elapsed time = 1.587 seconds 00:17:26.385 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 963590 00:17:26.385 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 963590 ']' 00:17:26.385 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 963590 00:17:26.385 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:26.385 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.385 08:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 963590 00:17:26.385 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:26.385 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:26.385 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 963590' 00:17:26.385 killing process with pid 963590 00:17:26.385 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 963590 00:17:26.385 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 963590 00:17:26.644 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:26.644 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:26.644 00:17:26.644 real 0m5.766s 00:17:26.644 user 0m16.246s 00:17:26.644 sys 0m0.544s 00:17:26.644 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:26.644 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:26.644 ************************************ 00:17:26.644 END TEST nvmf_vfio_user_nvme_compliance 00:17:26.644 ************************************ 00:17:26.644 08:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:26.644 08:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:26.644 
08:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:26.644 08:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:26.644 ************************************ 00:17:26.644 START TEST nvmf_vfio_user_fuzz 00:17:26.644 ************************************ 00:17:26.644 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:26.644 * Looking for test storage... 00:17:26.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- 
# NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.644 08:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:26.644 08:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=964313 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 964313' 00:17:26.644 Process pid: 964313 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 964313 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 964313 ']' 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.644 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:26.902 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.903 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:26.903 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:28.281 malloc0 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.281 08:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:28.281 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 
'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:00.351 Fuzzing completed. Shutting down the fuzz application 00:18:00.351 00:18:00.351 Dumping successful admin opcodes: 00:18:00.351 8, 9, 10, 24, 00:18:00.351 Dumping successful io opcodes: 00:18:00.351 0, 00:18:00.351 NS: 0x200003a1ef00 I/O qp, Total commands completed: 613663, total successful commands: 2370, random_seed: 2065605632 00:18:00.351 NS: 0x200003a1ef00 admin qp, Total commands completed: 145533, total successful commands: 1180, random_seed: 751763712 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 964313 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 964313 ']' 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 964313 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 964313 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:00.352 08:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 964313' 00:18:00.352 killing process with pid 964313 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 964313 00:18:00.352 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 964313 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:00.352 00:18:00.352 real 0m32.256s 00:18:00.352 user 0m32.672s 00:18:00.352 sys 0m26.906s 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.352 ************************************ 00:18:00.352 END TEST nvmf_vfio_user_fuzz 00:18:00.352 ************************************ 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.352 
************************************ 00:18:00.352 START TEST nvmf_auth_target 00:18:00.352 ************************************ 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:00.352 * Looking for test storage... 00:18:00.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.352 08:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- 
# subnqn=nqn.2024-03.io.spdk:cnode0 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.352 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:00.353 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:00.353 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:00.353 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.920 08:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:00.920 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:00.920 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:00.920 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:00.920 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:00.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:18:00.920 00:18:00.920 --- 10.0.0.2 ping statistics --- 00:18:00.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.920 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:18:00.920 00:18:00.920 --- 10.0.0.1 ping statistics --- 00:18:00.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.920 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:00.920 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=970247 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 970247 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 970247 ']' 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.921 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
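For reference, the interface plumbing that `nvmf_tcp_init` performs in the trace above, before `nvmf_tgt` is launched inside the namespace, amounts to the following privileged sketch. The interface names `cvl_0_0`/`cvl_0_1` and addresses are taken directly from the trace; every command requires root and real NICs, so this is illustration only, not something to run in a sandbox:

```shell
# Requires root. Mirrors the traced nvmf_tcp_init steps: isolate the target
# port in its own network namespace so initiator and target can talk over
# two ports of one physical NIC on a single host.
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP (default ns)
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0       # target IP (inside ns)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                            # verify initiator -> target path
```

Once this wiring is in place, the target application is started with `ip netns exec cvl_0_0_ns_spdk …` (the `NVMF_TARGET_NS_CMD` prefix seen in the trace), while the initiator side runs in the default namespace.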
00:18:01.181 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.181 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=970266 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@726 -- # digest=null 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7ebde90d9a1676abc2eaf463019df9075e0d62d144fc940e 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7xQ 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7ebde90d9a1676abc2eaf463019df9075e0d62d144fc940e 0 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7ebde90d9a1676abc2eaf463019df9075e0d62d144fc940e 0 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7ebde90d9a1676abc2eaf463019df9075e0d62d144fc940e 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7xQ 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7xQ 00:18:01.440 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.7xQ 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=25bd5a987032890b5fd24344ebc1e13e37db7b5c60453c6416687b05c64ab9f8 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eL4 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 25bd5a987032890b5fd24344ebc1e13e37db7b5c60453c6416687b05c64ab9f8 3 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 25bd5a987032890b5fd24344ebc1e13e37db7b5c60453c6416687b05c64ab9f8 3 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=25bd5a987032890b5fd24344ebc1e13e37db7b5c60453c6416687b05c64ab9f8 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eL4 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eL4 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.eL4 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=33bb86def1e1e8f3dcfdc04ae319360b 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.U3W 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 33bb86def1e1e8f3dcfdc04ae319360b 1 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
33bb86def1e1e8f3dcfdc04ae319360b 1 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=33bb86def1e1e8f3dcfdc04ae319360b 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.U3W 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.U3W 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.U3W 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6214f33293f4cb75f22c542e85c750a346aef08880ca8bf7 00:18:01.441 08:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.BT4 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6214f33293f4cb75f22c542e85c750a346aef08880ca8bf7 2 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6214f33293f4cb75f22c542e85c750a346aef08880ca8bf7 2 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6214f33293f4cb75f22c542e85c750a346aef08880ca8bf7 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:01.441 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.BT4 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.BT4 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.BT4 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c6299f2df5287a400bf851f4c54c09585211b9ef0d36c93d 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.drj 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c6299f2df5287a400bf851f4c54c09585211b9ef0d36c93d 2 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c6299f2df5287a400bf851f4c54c09585211b9ef0d36c93d 2 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c6299f2df5287a400bf851f4c54c09585211b9ef0d36c93d 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.drj 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.drj 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.drj 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7161b68539b46beb6a9cb2fec92520c0 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Siy 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7161b68539b46beb6a9cb2fec92520c0 1 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7161b68539b46beb6a9cb2fec92520c0 1 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7161b68539b46beb6a9cb2fec92520c0 00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:18:01.699 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Siy 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Siy 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Siy 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8f4762c435c6ce54dc8f9beced12378f13ae339b8433529afeb5ea1e9ea4e266 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2VG 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8f4762c435c6ce54dc8f9beced12378f13ae339b8433529afeb5ea1e9ea4e266 3 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 8f4762c435c6ce54dc8f9beced12378f13ae339b8433529afeb5ea1e9ea4e266 3 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8f4762c435c6ce54dc8f9beced12378f13ae339b8433529afeb5ea1e9ea4e266 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2VG 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2VG 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.2VG 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 970247 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 970247 ']' 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
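`format_dhchap_key`/`format_key` then wrap the raw hex string into the `DHHC-1:<digest>:<base64>:` secret form via the inline `python -` heredoc seen in the trace. The payload layout sketched below is an assumption inferred from the visible output (ASCII key bytes with a little-endian CRC-32 appended, base64-encoded, digest index as a two-digit field); the authoritative definition is the NVMe DH-HMAC-CHAP secret representation:

```python
import base64
import zlib

def format_dhchap_key(key_hex: str, digest_id: int) -> str:
    """Sketch of the DHHC-1 wrapping: base64(ASCII key + CRC-32),
    with the digest index between prefix and payload. Byte order of
    the appended CRC is assumed little-endian."""
    payload = key_hex.encode("ascii")
    crc = zlib.crc32(payload).to_bytes(4, "little")  # assumed byte order
    b64 = base64.b64encode(payload + crc).decode("ascii")
    return f"DHHC-1:{digest_id:02d}:{b64}:"

# Digest index 2 corresponds to sha384 in the trace's digests map.
secret = format_dhchap_key("c6299f2df5287a400bf851f4c54c09585211b9ef0d36c93d", 2)
```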
00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.699 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.957 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.957 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:01.957 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 970266 /var/tmp/host.sock 00:18:01.957 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 970266 ']' 00:18:01.957 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:01.957 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.957 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:01.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
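`waitforlisten` above blocks until the target's JSON-RPC server accepts connections on its UNIX socket (`/var/tmp/spdk.sock`, then `/var/tmp/host.sock`), retrying up to `max_retries` times. A hedged sketch of that polling loop (`wait_for_unix_socket` is an illustrative name, not SPDK's helper):

```python
import socket
import time

def wait_for_unix_socket(path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Poll a UNIX-domain socket until connect() succeeds or retries run out."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(delay)
        finally:
            s.close()
    return False
```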
00:18:01.957 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.957 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.243 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.243 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:02.243 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:02.244 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.244 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.244 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.244 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:02.244 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7xQ 00:18:02.244 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.244 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.244 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.244 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.7xQ 00:18:02.244 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.7xQ 00:18:02.507 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.eL4 ]] 00:18:02.507 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eL4 00:18:02.507 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.507 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.507 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.507 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eL4 00:18:02.507 08:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eL4 00:18:02.765 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:02.765 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.U3W 00:18:02.765 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.765 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.765 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.765 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.U3W 00:18:02.765 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.U3W 00:18:03.023 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.BT4 ]] 00:18:03.023 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BT4 00:18:03.023 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.023 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.023 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.023 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BT4 00:18:03.023 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BT4 00:18:03.281 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:03.281 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.drj 00:18:03.281 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.281 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.281 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.281 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.drj 00:18:03.281 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.drj 00:18:03.539 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.Siy ]] 00:18:03.539 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Siy 00:18:03.539 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.539 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.539 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.539 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Siy 00:18:03.539 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Siy 00:18:03.797 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:03.797 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2VG 00:18:03.797 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.797 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.797 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.797 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2VG 00:18:03.797 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2VG 00:18:04.055 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:18:04.055 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:04.055 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.055 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.055 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.055 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
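The nested `for digest` / `for dhgroup` / `for keyid` loops above sweep every digest, DH-group, and key combination; in this slice of the trace that is sha256 with the null group across keys 0 through 3. The sweep itself is just a cross product:

```python
from itertools import product

# Values visible in this slice of the trace; the full test sweeps
# more digests and DH groups than shown here.
digests = ["sha256"]
dhgroups = ["null"]
keyids = [0, 1, 2, 3]

# Each tuple drives one bdev_nvme_set_options + attach/verify/detach cycle.
combos = list(product(digests, dhgroups, keyids))
```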
00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.312 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.570 00:18:04.570 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.570 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.570 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.828 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.828 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.828 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.828 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.828 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.828 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:04.828 { 00:18:04.828 "cntlid": 1, 00:18:04.828 "qid": 0, 00:18:04.828 "state": "enabled", 00:18:04.828 "thread": "nvmf_tgt_poll_group_000", 00:18:04.828 "listen_address": { 00:18:04.828 "trtype": "TCP", 00:18:04.828 "adrfam": "IPv4", 00:18:04.828 "traddr": "10.0.0.2", 00:18:04.828 "trsvcid": "4420" 00:18:04.828 }, 00:18:04.828 "peer_address": { 00:18:04.828 "trtype": "TCP", 00:18:04.828 "adrfam": "IPv4", 00:18:04.828 "traddr": "10.0.0.1", 00:18:04.828 "trsvcid": "53432" 00:18:04.828 }, 00:18:04.828 "auth": { 00:18:04.828 "state": "completed", 00:18:04.828 "digest": "sha256", 00:18:04.828 "dhgroup": "null" 00:18:04.828 } 00:18:04.828 } 00:18:04.828 ]' 00:18:04.828 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.828 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.828 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.086 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:05.086 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.086 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.086 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.086 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.344 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:18:06.281 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.281 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.281 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.281 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.281 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.281 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.281 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:06.281 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.539 08:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.539 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.797 00:18:06.797 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.797 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.797 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.055 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.055 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.055 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.055 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.055 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.055 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.055 { 00:18:07.055 "cntlid": 3, 00:18:07.055 "qid": 0, 00:18:07.055 "state": "enabled", 00:18:07.055 "thread": "nvmf_tgt_poll_group_000", 00:18:07.055 "listen_address": { 00:18:07.055 "trtype": "TCP", 00:18:07.055 "adrfam": "IPv4", 00:18:07.055 "traddr": "10.0.0.2", 00:18:07.055 "trsvcid": "4420" 00:18:07.055 }, 00:18:07.055 "peer_address": { 00:18:07.055 "trtype": "TCP", 00:18:07.055 "adrfam": "IPv4", 00:18:07.055 "traddr": "10.0.0.1", 00:18:07.055 "trsvcid": "53470" 00:18:07.055 }, 00:18:07.055 "auth": { 00:18:07.055 "state": "completed", 00:18:07.055 "digest": "sha256", 00:18:07.055 "dhgroup": "null" 00:18:07.055 } 00:18:07.055 } 00:18:07.055 ]' 00:18:07.055 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.055 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.055 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.313 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:07.313 08:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.313 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.313 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.313 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.571 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:18:08.507 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.507 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.507 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.507 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.507 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.507 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.507 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:08.507 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.765 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.765 
08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.021 00:18:09.021 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.021 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.021 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.278 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.278 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.278 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.278 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.278 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.278 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.278 { 00:18:09.278 "cntlid": 5, 00:18:09.278 "qid": 0, 00:18:09.278 "state": "enabled", 00:18:09.278 "thread": "nvmf_tgt_poll_group_000", 00:18:09.278 "listen_address": { 00:18:09.278 "trtype": "TCP", 00:18:09.278 "adrfam": "IPv4", 00:18:09.278 "traddr": "10.0.0.2", 00:18:09.278 "trsvcid": "4420" 00:18:09.278 }, 00:18:09.278 "peer_address": { 00:18:09.278 "trtype": "TCP", 00:18:09.278 "adrfam": "IPv4", 00:18:09.278 "traddr": 
"10.0.0.1", 00:18:09.278 "trsvcid": "53496" 00:18:09.278 }, 00:18:09.278 "auth": { 00:18:09.278 "state": "completed", 00:18:09.278 "digest": "sha256", 00:18:09.278 "dhgroup": "null" 00:18:09.278 } 00:18:09.278 } 00:18:09.278 ]' 00:18:09.278 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.278 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.278 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.535 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:09.535 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.535 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.535 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.535 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.792 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:18:10.727 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.727 08:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.727 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.727 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.727 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.727 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.727 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:10.727 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.985 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.255 00:18:11.255 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.255 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.255 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.513 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.513 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.513 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.513 08:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.513 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.513 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.513 { 00:18:11.513 "cntlid": 7, 00:18:11.513 "qid": 0, 00:18:11.513 "state": "enabled", 00:18:11.513 "thread": "nvmf_tgt_poll_group_000", 00:18:11.513 "listen_address": { 00:18:11.513 "trtype": "TCP", 00:18:11.513 "adrfam": "IPv4", 00:18:11.513 "traddr": "10.0.0.2", 00:18:11.513 "trsvcid": "4420" 00:18:11.513 }, 00:18:11.513 "peer_address": { 00:18:11.513 "trtype": "TCP", 00:18:11.513 "adrfam": "IPv4", 00:18:11.513 "traddr": "10.0.0.1", 00:18:11.513 "trsvcid": "43306" 00:18:11.513 }, 00:18:11.513 "auth": { 00:18:11.513 "state": "completed", 00:18:11.513 "digest": "sha256", 00:18:11.513 "dhgroup": "null" 00:18:11.513 } 00:18:11.513 } 00:18:11.513 ]' 00:18:11.513 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.513 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.513 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.513 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:11.513 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.769 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.769 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.769 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.027 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:18:12.961 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.961 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.961 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.961 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.961 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.961 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.961 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.961 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.961 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.220 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.478 00:18:13.478 08:51:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.478 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.478 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.735 { 00:18:13.735 "cntlid": 9, 00:18:13.735 "qid": 0, 00:18:13.735 "state": "enabled", 00:18:13.735 "thread": "nvmf_tgt_poll_group_000", 00:18:13.735 "listen_address": { 00:18:13.735 "trtype": "TCP", 00:18:13.735 "adrfam": "IPv4", 00:18:13.735 "traddr": "10.0.0.2", 00:18:13.735 "trsvcid": "4420" 00:18:13.735 }, 00:18:13.735 "peer_address": { 00:18:13.735 "trtype": "TCP", 00:18:13.735 "adrfam": "IPv4", 00:18:13.735 "traddr": "10.0.0.1", 00:18:13.735 "trsvcid": "43324" 00:18:13.735 }, 00:18:13.735 "auth": { 00:18:13.735 "state": "completed", 00:18:13.735 "digest": "sha256", 00:18:13.735 "dhgroup": "ffdhe2048" 00:18:13.735 } 00:18:13.735 } 00:18:13.735 ]' 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.735 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.993 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.993 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.993 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.251 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:18:15.197 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.197 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.197 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.197 08:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.198 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.198 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.198 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:15.198 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.457 08:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.457 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.714 00:18:15.714 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.714 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.714 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.971 { 
00:18:15.971 "cntlid": 11, 00:18:15.971 "qid": 0, 00:18:15.971 "state": "enabled", 00:18:15.971 "thread": "nvmf_tgt_poll_group_000", 00:18:15.971 "listen_address": { 00:18:15.971 "trtype": "TCP", 00:18:15.971 "adrfam": "IPv4", 00:18:15.971 "traddr": "10.0.0.2", 00:18:15.971 "trsvcid": "4420" 00:18:15.971 }, 00:18:15.971 "peer_address": { 00:18:15.971 "trtype": "TCP", 00:18:15.971 "adrfam": "IPv4", 00:18:15.971 "traddr": "10.0.0.1", 00:18:15.971 "trsvcid": "43338" 00:18:15.971 }, 00:18:15.971 "auth": { 00:18:15.971 "state": "completed", 00:18:15.971 "digest": "sha256", 00:18:15.971 "dhgroup": "ffdhe2048" 00:18:15.971 } 00:18:15.971 } 00:18:15.971 ]' 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.971 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.228 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.228 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.228 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.487 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:18:17.423 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.423 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.423 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.423 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.423 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.423 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.423 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:17.423 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.682 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.940 00:18:17.940 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.940 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.940 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.228 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.228 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.228 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.228 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.228 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.228 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.228 { 00:18:18.228 "cntlid": 13, 00:18:18.228 "qid": 0, 00:18:18.228 "state": "enabled", 00:18:18.228 "thread": "nvmf_tgt_poll_group_000", 00:18:18.228 "listen_address": { 00:18:18.228 "trtype": "TCP", 00:18:18.228 "adrfam": "IPv4", 00:18:18.228 "traddr": "10.0.0.2", 00:18:18.228 "trsvcid": "4420" 00:18:18.228 }, 00:18:18.228 "peer_address": { 00:18:18.228 "trtype": "TCP", 00:18:18.228 "adrfam": "IPv4", 00:18:18.228 "traddr": "10.0.0.1", 00:18:18.228 "trsvcid": "43368" 00:18:18.228 }, 00:18:18.228 "auth": { 00:18:18.228 "state": "completed", 00:18:18.228 "digest": "sha256", 00:18:18.228 "dhgroup": "ffdhe2048" 00:18:18.228 } 00:18:18.228 } 00:18:18.228 ]' 00:18:18.228 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.228 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.228 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.486 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.486 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.486 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.486 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.486 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.745 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:18:19.680 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.680 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.680 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.680 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.680 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.680 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.680 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:19.680 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.938 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:20.196 
00:18:20.196 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:20.196 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:20.196 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:20.454 {
00:18:20.454 "cntlid": 15,
00:18:20.454 "qid": 0,
00:18:20.454 "state": "enabled",
00:18:20.454 "thread": "nvmf_tgt_poll_group_000",
00:18:20.454 "listen_address": {
00:18:20.454 "trtype": "TCP",
00:18:20.454 "adrfam": "IPv4",
00:18:20.454 "traddr": "10.0.0.2",
00:18:20.454 "trsvcid": "4420"
00:18:20.454 },
00:18:20.454 "peer_address": {
00:18:20.454 "trtype": "TCP",
00:18:20.454 "adrfam": "IPv4",
00:18:20.454 "traddr": "10.0.0.1",
00:18:20.454 "trsvcid": "43384"
00:18:20.454 },
00:18:20.454 "auth": {
00:18:20.454 "state": "completed",
00:18:20.454 "digest": "sha256",
00:18:20.454 "dhgroup": "ffdhe2048"
00:18:20.454 }
00:18:20.454 }
00:18:20.454 ]'
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:20.454 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:20.713 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:20.713 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:20.713 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:20.973 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 
00:18:21.907 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:21.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:21.907 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:21.907 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.907 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.907 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.907 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:21.907 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:21.907 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:21.907 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:22.165 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:22.422 
00:18:22.422 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:22.422 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:22.422 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:22.679 {
00:18:22.679 "cntlid": 17,
00:18:22.679 "qid": 0,
00:18:22.679 "state": "enabled",
00:18:22.679 "thread": "nvmf_tgt_poll_group_000",
00:18:22.679 "listen_address": {
00:18:22.679 "trtype": "TCP",
00:18:22.679 "adrfam": "IPv4",
00:18:22.679 "traddr": "10.0.0.2",
00:18:22.679 "trsvcid": "4420"
00:18:22.679 },
00:18:22.679 "peer_address": {
00:18:22.679 "trtype": "TCP",
00:18:22.679 "adrfam": "IPv4",
00:18:22.679 "traddr": "10.0.0.1",
00:18:22.679 "trsvcid": "49520"
00:18:22.679 },
00:18:22.679 "auth": {
00:18:22.679 "state": "completed",
00:18:22.679 "digest": "sha256",
00:18:22.679 "dhgroup": "ffdhe3072"
00:18:22.679 }
00:18:22.679 }
00:18:22.679 ]'
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:22.679 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:22.937 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:22.937 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:22.937 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:23.195 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 
00:18:24.127 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:24.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:24.127 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:24.128 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:24.128 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.128 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:24.128 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:24.128 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:24.128 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:24.385 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:24.643 
00:18:24.643 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:24.643 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:24.643 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:24.901 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:24.901 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:24.901 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:24.901 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.901 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:24.901 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:24.901 {
00:18:24.901 "cntlid": 19,
00:18:24.901 "qid": 0,
00:18:24.901 "state": "enabled",
00:18:24.901 "thread": "nvmf_tgt_poll_group_000",
00:18:24.901 "listen_address": {
00:18:24.901 "trtype": "TCP",
00:18:24.901 "adrfam": "IPv4",
00:18:24.901 "traddr": "10.0.0.2",
00:18:24.901 "trsvcid": "4420"
00:18:24.901 },
00:18:24.901 "peer_address": {
00:18:24.901 "trtype": "TCP",
00:18:24.901 "adrfam": "IPv4",
00:18:24.901 "traddr": "10.0.0.1",
00:18:24.901 "trsvcid": "49542"
00:18:24.901 },
00:18:24.901 "auth": {
00:18:24.901 "state": "completed",
00:18:24.901 "digest": "sha256",
00:18:24.901 "dhgroup": "ffdhe3072"
00:18:24.901 }
00:18:24.901 }
00:18:24.901 ]'
00:18:24.901 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:24.902 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:24.902 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:24.902 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:24.902 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:24.902 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:24.902 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:24.902 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:25.159 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:26.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:26.534 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:26.791 
00:18:26.791 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:26.791 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:26.791 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:27.055 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:27.055 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:27.056 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:27.056 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.056 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:27.056 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:27.056 {
00:18:27.056 "cntlid": 21,
00:18:27.056 "qid": 0,
00:18:27.056 "state": "enabled",
00:18:27.056 "thread": "nvmf_tgt_poll_group_000",
00:18:27.056 "listen_address": {
00:18:27.056 "trtype": "TCP",
00:18:27.056 "adrfam": "IPv4",
00:18:27.056 "traddr": "10.0.0.2",
00:18:27.056 "trsvcid": "4420"
00:18:27.056 },
00:18:27.056 "peer_address": {
00:18:27.056 "trtype": "TCP",
00:18:27.056 "adrfam": "IPv4",
00:18:27.056 "traddr": "10.0.0.1",
00:18:27.056 "trsvcid": "49570"
00:18:27.056 },
00:18:27.056 "auth": {
00:18:27.056 "state": "completed",
00:18:27.056 "digest": "sha256",
00:18:27.056 "dhgroup": "ffdhe3072"
00:18:27.056 }
00:18:27.056 }
00:18:27.056 ]'
00:18:27.313 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:27.313 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:27.313 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:27.313 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:27.313 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:27.313 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:27.313 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:27.313 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:27.572 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 
00:18:28.508 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:28.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:28.508 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:28.508 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:28.508 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:28.508 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:28.508 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:28.508 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:28.508 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:28.766 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:29.024 
00:18:29.024 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:29.024 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:29.024 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:29.282 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:29.282 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:29.282 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:29.282 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.282 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:29.282 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:29.282 {
00:18:29.282 "cntlid": 23,
00:18:29.282 "qid": 0,
00:18:29.282 "state": "enabled",
00:18:29.282 "thread": "nvmf_tgt_poll_group_000",
00:18:29.282 "listen_address": {
00:18:29.282 "trtype": "TCP",
00:18:29.282 "adrfam": "IPv4",
00:18:29.282 "traddr": "10.0.0.2",
00:18:29.282 "trsvcid": "4420"
00:18:29.282 },
00:18:29.282 "peer_address": {
00:18:29.282 "trtype": "TCP",
00:18:29.282 "adrfam": "IPv4",
00:18:29.282 "traddr": "10.0.0.1",
00:18:29.282 "trsvcid": "49594"
00:18:29.282 },
00:18:29.282 "auth": {
00:18:29.282 "state": "completed",
00:18:29.282 "digest": "sha256",
00:18:29.282 "dhgroup": "ffdhe3072"
00:18:29.282 }
00:18:29.282 }
00:18:29.282 ]'
00:18:29.282 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:29.282 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:29.282 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:29.540 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:29.540 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:29.540 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:29.540 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:29.540 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:29.798 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 
00:18:30.734 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:30.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:30.734 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:30.734 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:30.734 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:30.734 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:30.734 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:30.734 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:30.734 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:30.734 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:30.992 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:31.250 
00:18:31.250 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:31.250 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:31.250 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:31.508 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:31.508 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:31.508 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:31.508 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.508 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:31.508 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:31.508 {
00:18:31.508 "cntlid": 25,
00:18:31.508 "qid": 0,
00:18:31.508 "state": "enabled",
00:18:31.508 "thread": "nvmf_tgt_poll_group_000",
00:18:31.508 "listen_address": {
00:18:31.508 "trtype": "TCP",
00:18:31.508 "adrfam": "IPv4",
00:18:31.508 "traddr": "10.0.0.2",
00:18:31.508 "trsvcid": "4420"
00:18:31.508 },
00:18:31.508 "peer_address": {
00:18:31.508 "trtype": "TCP",
00:18:31.508 "adrfam": "IPv4",
00:18:31.508 "traddr": "10.0.0.1",
00:18:31.508 "trsvcid": "53452"
00:18:31.508 },
00:18:31.508 "auth": {
00:18:31.508 "state": "completed",
00:18:31.508 "digest": "sha256",
00:18:31.508 "dhgroup": "ffdhe4096"
00:18:31.508 }
00:18:31.508 }
00:18:31.508 ]'
00:18:31.508 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:31.508 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:31.508 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:31.766 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:31.766 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:31.766 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:31.766 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:31.766 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:32.024 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 
00:18:32.962 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:32.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:32.962 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:32.962 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.962 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.962 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.962 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:32.962 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:32.962 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.220 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.788 00:18:33.788 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.788 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.788 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.788 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.788 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.788 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:33.788 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.788 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.788 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.788 { 00:18:33.788 "cntlid": 27, 00:18:33.788 "qid": 0, 00:18:33.788 "state": "enabled", 00:18:33.788 "thread": "nvmf_tgt_poll_group_000", 00:18:33.788 "listen_address": { 00:18:33.788 "trtype": "TCP", 00:18:33.788 "adrfam": "IPv4", 00:18:33.788 "traddr": "10.0.0.2", 00:18:33.788 "trsvcid": "4420" 00:18:33.788 }, 00:18:33.788 "peer_address": { 00:18:33.788 "trtype": "TCP", 00:18:33.788 "adrfam": "IPv4", 00:18:33.788 "traddr": "10.0.0.1", 00:18:33.788 "trsvcid": "53498" 00:18:33.788 }, 00:18:33.788 "auth": { 00:18:33.788 "state": "completed", 00:18:33.788 "digest": "sha256", 00:18:33.788 "dhgroup": "ffdhe4096" 00:18:33.788 } 00:18:33.788 } 00:18:33.788 ]' 00:18:33.788 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.046 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.046 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.046 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.046 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.046 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.046 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.046 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.329 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:18:35.273 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.273 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.273 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.273 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.273 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.273 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.273 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:35.273 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.532 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.101 00:18:36.101 08:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.101 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.101 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.361 { 00:18:36.361 "cntlid": 29, 00:18:36.361 "qid": 0, 00:18:36.361 "state": "enabled", 00:18:36.361 "thread": "nvmf_tgt_poll_group_000", 00:18:36.361 "listen_address": { 00:18:36.361 "trtype": "TCP", 00:18:36.361 "adrfam": "IPv4", 00:18:36.361 "traddr": "10.0.0.2", 00:18:36.361 "trsvcid": "4420" 00:18:36.361 }, 00:18:36.361 "peer_address": { 00:18:36.361 "trtype": "TCP", 00:18:36.361 "adrfam": "IPv4", 00:18:36.361 "traddr": "10.0.0.1", 00:18:36.361 "trsvcid": "53514" 00:18:36.361 }, 00:18:36.361 "auth": { 00:18:36.361 "state": "completed", 00:18:36.361 "digest": "sha256", 00:18:36.361 "dhgroup": "ffdhe4096" 00:18:36.361 } 00:18:36.361 } 00:18:36.361 ]' 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.361 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.619 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:18:37.555 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.555 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.555 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.555 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.555 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.555 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.555 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:37.555 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.124 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.382 00:18:38.382 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.382 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.382 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.640 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.640 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.640 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.640 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.640 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.640 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.640 { 00:18:38.640 "cntlid": 31, 00:18:38.640 "qid": 0, 00:18:38.640 "state": "enabled", 00:18:38.640 "thread": "nvmf_tgt_poll_group_000", 
00:18:38.640 "listen_address": { 00:18:38.640 "trtype": "TCP", 00:18:38.640 "adrfam": "IPv4", 00:18:38.640 "traddr": "10.0.0.2", 00:18:38.640 "trsvcid": "4420" 00:18:38.640 }, 00:18:38.640 "peer_address": { 00:18:38.640 "trtype": "TCP", 00:18:38.640 "adrfam": "IPv4", 00:18:38.640 "traddr": "10.0.0.1", 00:18:38.640 "trsvcid": "53554" 00:18:38.640 }, 00:18:38.640 "auth": { 00:18:38.640 "state": "completed", 00:18:38.640 "digest": "sha256", 00:18:38.640 "dhgroup": "ffdhe4096" 00:18:38.640 } 00:18:38.640 } 00:18:38.640 ]' 00:18:38.640 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.640 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.640 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.640 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.640 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.640 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.640 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.640 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.899 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 
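The `connect_authenticate` cycles above all end the same way: `auth.sh` fetches the qpair list over the host RPC socket and checks three fields with `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). A minimal Python sketch of that verification step, with sample JSON copied from the log output above (`check_auth` is a hypothetical helper name, not part of `auth.sh`):

```python
# Sketch of the verification that auth.sh performs with jq + [[ ... ]].
# The sample JSON mirrors the nvmf_subsystem_get_qpairs output in the log.
import json

qpairs_json = '''
[
  {
    "cntlid": 27,
    "qid": 0,
    "state": "enabled",
    "thread": "nvmf_tgt_poll_group_000",
    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                        "traddr": "10.0.0.2", "trsvcid": "4420" },
    "peer_address":   { "trtype": "TCP", "adrfam": "IPv4",
                        "traddr": "10.0.0.1", "trsvcid": "53498" },
    "auth": { "state": "completed", "digest": "sha256",
              "dhgroup": "ffdhe4096" }
  }
]
'''

def check_auth(qpairs, digest, dhgroup):
    # Equivalent of the three jq extractions followed by [[ x == y ]] tests:
    # the DH-HMAC-CHAP exchange must have completed with the expected
    # digest and DH group for this pass of the test loop.
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

print(check_auth(json.loads(qpairs_json), "sha256", "ffdhe4096"))  # True
```

The outer loops in the log vary `dhgroup` (ffdhe4096, then ffdhe6144) and the key index while the check itself stays fixed.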
00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.275 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.843 00:18:40.843 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.843 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.843 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.101 08:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.101 { 00:18:41.101 "cntlid": 33, 00:18:41.101 "qid": 0, 00:18:41.101 "state": "enabled", 00:18:41.101 "thread": "nvmf_tgt_poll_group_000", 00:18:41.101 "listen_address": { 00:18:41.101 "trtype": "TCP", 00:18:41.101 "adrfam": "IPv4", 00:18:41.101 "traddr": "10.0.0.2", 00:18:41.101 "trsvcid": "4420" 00:18:41.101 }, 00:18:41.101 "peer_address": { 00:18:41.101 "trtype": "TCP", 00:18:41.101 "adrfam": "IPv4", 00:18:41.101 "traddr": "10.0.0.1", 00:18:41.101 "trsvcid": "53580" 00:18:41.101 }, 00:18:41.101 "auth": { 00:18:41.101 "state": "completed", 00:18:41.101 "digest": "sha256", 00:18:41.101 "dhgroup": "ffdhe6144" 00:18:41.101 } 00:18:41.101 } 00:18:41.101 ]' 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.101 08:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.101 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.359 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:18:42.292 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.292 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.292 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.292 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.292 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.292 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.292 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:18:42.292 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.551 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.119 00:18:43.377 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.377 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.377 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.634 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.634 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.634 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.634 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.634 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.634 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.634 { 00:18:43.634 "cntlid": 35, 00:18:43.634 "qid": 0, 00:18:43.634 "state": "enabled", 00:18:43.634 "thread": "nvmf_tgt_poll_group_000", 00:18:43.634 "listen_address": { 00:18:43.635 "trtype": "TCP", 00:18:43.635 "adrfam": "IPv4", 00:18:43.635 "traddr": "10.0.0.2", 00:18:43.635 "trsvcid": "4420" 00:18:43.635 }, 00:18:43.635 "peer_address": { 00:18:43.635 "trtype": "TCP", 00:18:43.635 "adrfam": "IPv4", 00:18:43.635 "traddr": "10.0.0.1", 00:18:43.635 "trsvcid": "38422" 00:18:43.635 
}, 00:18:43.635 "auth": { 00:18:43.635 "state": "completed", 00:18:43.635 "digest": "sha256", 00:18:43.635 "dhgroup": "ffdhe6144" 00:18:43.635 } 00:18:43.635 } 00:18:43.635 ]' 00:18:43.635 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.635 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.635 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.635 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:43.635 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.635 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.635 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.635 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.893 08:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:18:44.831 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.831 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.831 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.831 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.831 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.831 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.831 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:44.831 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.088 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.089 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.657 00:18:45.915 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.915 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.916 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.173 { 00:18:46.173 "cntlid": 37, 00:18:46.173 "qid": 0, 00:18:46.173 "state": "enabled", 00:18:46.173 "thread": "nvmf_tgt_poll_group_000", 00:18:46.173 "listen_address": { 00:18:46.173 "trtype": "TCP", 00:18:46.173 "adrfam": "IPv4", 00:18:46.173 "traddr": "10.0.0.2", 00:18:46.173 "trsvcid": "4420" 00:18:46.173 }, 00:18:46.173 "peer_address": { 00:18:46.173 "trtype": "TCP", 00:18:46.173 "adrfam": "IPv4", 00:18:46.173 "traddr": "10.0.0.1", 00:18:46.173 "trsvcid": "38442" 00:18:46.173 }, 00:18:46.173 "auth": { 00:18:46.173 "state": "completed", 00:18:46.173 "digest": "sha256", 00:18:46.173 "dhgroup": "ffdhe6144" 00:18:46.173 } 00:18:46.173 } 00:18:46.173 ]' 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.173 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:46.430 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:18:47.365 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.365 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.365 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.365 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.365 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.365 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.365 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:47.365 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:47.624 08:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.624 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.192 00:18:48.192 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.192 08:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.192 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.450 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.450 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.450 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.450 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.450 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.450 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.450 { 00:18:48.450 "cntlid": 39, 00:18:48.450 "qid": 0, 00:18:48.450 "state": "enabled", 00:18:48.450 "thread": "nvmf_tgt_poll_group_000", 00:18:48.450 "listen_address": { 00:18:48.450 "trtype": "TCP", 00:18:48.450 "adrfam": "IPv4", 00:18:48.450 "traddr": "10.0.0.2", 00:18:48.450 "trsvcid": "4420" 00:18:48.450 }, 00:18:48.450 "peer_address": { 00:18:48.450 "trtype": "TCP", 00:18:48.450 "adrfam": "IPv4", 00:18:48.450 "traddr": "10.0.0.1", 00:18:48.450 "trsvcid": "38470" 00:18:48.450 }, 00:18:48.450 "auth": { 00:18:48.450 "state": "completed", 00:18:48.450 "digest": "sha256", 00:18:48.450 "dhgroup": "ffdhe6144" 00:18:48.450 } 00:18:48.450 } 00:18:48.450 ]' 00:18:48.450 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.450 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.450 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.708 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:48.708 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.708 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.708 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.708 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.966 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:18:49.903 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.903 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.903 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.903 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.903 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.903 08:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.903 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.903 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:49.903 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.161 08:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.161 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.098 00:18:51.098 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.098 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.098 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.385 { 00:18:51.385 "cntlid": 41, 00:18:51.385 "qid": 0, 00:18:51.385 "state": "enabled", 00:18:51.385 "thread": 
"nvmf_tgt_poll_group_000", 00:18:51.385 "listen_address": { 00:18:51.385 "trtype": "TCP", 00:18:51.385 "adrfam": "IPv4", 00:18:51.385 "traddr": "10.0.0.2", 00:18:51.385 "trsvcid": "4420" 00:18:51.385 }, 00:18:51.385 "peer_address": { 00:18:51.385 "trtype": "TCP", 00:18:51.385 "adrfam": "IPv4", 00:18:51.385 "traddr": "10.0.0.1", 00:18:51.385 "trsvcid": "38496" 00:18:51.385 }, 00:18:51.385 "auth": { 00:18:51.385 "state": "completed", 00:18:51.385 "digest": "sha256", 00:18:51.385 "dhgroup": "ffdhe8192" 00:18:51.385 } 00:18:51.385 } 00:18:51.385 ]' 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.385 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.643 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:18:52.579 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.579 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.579 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.579 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.579 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.579 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.579 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:52.579 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.837 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.838 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.770 00:18:53.770 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.770 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.770 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.028 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.028 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.028 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.028 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.028 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.028 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.028 { 00:18:54.028 "cntlid": 43, 00:18:54.028 "qid": 0, 00:18:54.028 "state": "enabled", 00:18:54.028 "thread": "nvmf_tgt_poll_group_000", 00:18:54.028 "listen_address": { 00:18:54.028 "trtype": "TCP", 00:18:54.028 "adrfam": "IPv4", 00:18:54.028 "traddr": "10.0.0.2", 00:18:54.028 "trsvcid": "4420" 00:18:54.028 }, 00:18:54.028 "peer_address": { 00:18:54.028 "trtype": "TCP", 00:18:54.028 "adrfam": "IPv4", 00:18:54.028 "traddr": "10.0.0.1", 00:18:54.028 "trsvcid": "42532" 00:18:54.028 }, 00:18:54.028 "auth": { 00:18:54.028 "state": "completed", 00:18:54.028 "digest": "sha256", 00:18:54.028 "dhgroup": "ffdhe8192" 00:18:54.028 } 00:18:54.028 } 00:18:54.028 ]' 00:18:54.028 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.287 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.287 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.287 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.287 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.287 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.287 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.287 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.543 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:18:55.477 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.477 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.477 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.477 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.477 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.477 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.477 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:55.477 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.734 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.734 08:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.672 00:18:56.672 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.672 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.672 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.929 { 00:18:56.929 "cntlid": 45, 00:18:56.929 "qid": 0, 00:18:56.929 "state": "enabled", 00:18:56.929 "thread": "nvmf_tgt_poll_group_000", 00:18:56.929 "listen_address": { 00:18:56.929 "trtype": "TCP", 00:18:56.929 "adrfam": "IPv4", 00:18:56.929 "traddr": "10.0.0.2", 00:18:56.929 "trsvcid": "4420" 00:18:56.929 }, 00:18:56.929 "peer_address": { 00:18:56.929 "trtype": "TCP", 00:18:56.929 "adrfam": "IPv4", 00:18:56.929 "traddr": "10.0.0.1", 
00:18:56.929 "trsvcid": "42546" 00:18:56.929 }, 00:18:56.929 "auth": { 00:18:56.929 "state": "completed", 00:18:56.929 "digest": "sha256", 00:18:56.929 "dhgroup": "ffdhe8192" 00:18:56.929 } 00:18:56.929 } 00:18:56.929 ]' 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.929 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.189 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.189 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.189 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.448 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:18:58.384 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.384 08:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.384 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.384 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.384 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.384 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.384 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:58.384 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.574 00:18:59.574 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.574 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.574 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.832 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.832 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.832 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.832 08:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.832 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.832 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.832 { 00:18:59.832 "cntlid": 47, 00:18:59.832 "qid": 0, 00:18:59.832 "state": "enabled", 00:18:59.832 "thread": "nvmf_tgt_poll_group_000", 00:18:59.832 "listen_address": { 00:18:59.832 "trtype": "TCP", 00:18:59.832 "adrfam": "IPv4", 00:18:59.832 "traddr": "10.0.0.2", 00:18:59.832 "trsvcid": "4420" 00:18:59.832 }, 00:18:59.832 "peer_address": { 00:18:59.832 "trtype": "TCP", 00:18:59.832 "adrfam": "IPv4", 00:18:59.832 "traddr": "10.0.0.1", 00:18:59.832 "trsvcid": "42576" 00:18:59.832 }, 00:18:59.832 "auth": { 00:18:59.832 "state": "completed", 00:18:59.832 "digest": "sha256", 00:18:59.832 "dhgroup": "ffdhe8192" 00:18:59.832 } 00:18:59.832 } 00:18:59.832 ]' 00:18:59.832 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.832 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.832 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.832 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.832 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.090 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.090 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.090 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.347 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:19:01.282 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.282 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.282 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.282 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.282 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.282 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:01.282 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.282 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.282 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:01.282 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.540 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.798 00:19:01.798 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.798 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.798 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.055 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.055 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.055 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.055 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.055 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.055 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.055 { 00:19:02.055 "cntlid": 49, 00:19:02.055 "qid": 0, 00:19:02.055 "state": "enabled", 00:19:02.055 "thread": "nvmf_tgt_poll_group_000", 00:19:02.055 "listen_address": { 00:19:02.055 "trtype": "TCP", 00:19:02.055 "adrfam": "IPv4", 00:19:02.055 "traddr": "10.0.0.2", 00:19:02.055 "trsvcid": "4420" 00:19:02.055 }, 00:19:02.055 "peer_address": { 00:19:02.055 "trtype": "TCP", 00:19:02.055 "adrfam": "IPv4", 00:19:02.055 "traddr": "10.0.0.1", 00:19:02.055 "trsvcid": "47610" 00:19:02.055 }, 00:19:02.055 "auth": { 00:19:02.055 "state": "completed", 00:19:02.055 "digest": "sha384", 00:19:02.055 "dhgroup": "null" 00:19:02.055 } 00:19:02.055 } 00:19:02.055 ]' 00:19:02.055 
08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.314 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.314 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.314 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:02.314 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.314 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.314 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.314 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.571 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:19:03.510 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.510 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.510 
08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.510 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.510 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.510 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.510 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:03.510 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.769 08:52:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.769 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.027 00:19:04.027 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.027 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.027 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.285 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.285 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.285 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.285 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.285 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:04.285 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.285 { 00:19:04.285 "cntlid": 51, 00:19:04.285 "qid": 0, 00:19:04.285 "state": "enabled", 00:19:04.285 "thread": "nvmf_tgt_poll_group_000", 00:19:04.285 "listen_address": { 00:19:04.285 "trtype": "TCP", 00:19:04.285 "adrfam": "IPv4", 00:19:04.285 "traddr": "10.0.0.2", 00:19:04.285 "trsvcid": "4420" 00:19:04.285 }, 00:19:04.285 "peer_address": { 00:19:04.285 "trtype": "TCP", 00:19:04.285 "adrfam": "IPv4", 00:19:04.285 "traddr": "10.0.0.1", 00:19:04.285 "trsvcid": "47640" 00:19:04.285 }, 00:19:04.285 "auth": { 00:19:04.285 "state": "completed", 00:19:04.285 "digest": "sha384", 00:19:04.285 "dhgroup": "null" 00:19:04.285 } 00:19:04.285 } 00:19:04.285 ]' 00:19:04.285 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.542 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.542 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.542 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.542 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.543 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.543 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.543 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.800 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:19:05.744 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.744 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.744 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.744 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.744 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.744 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.744 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:05.744 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.002 08:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.002 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.569 00:19:06.569 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.569 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.569 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.569 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.569 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.569 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.569 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.569 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.569 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.569 { 00:19:06.569 "cntlid": 53, 00:19:06.569 "qid": 0, 00:19:06.569 "state": "enabled", 00:19:06.569 "thread": "nvmf_tgt_poll_group_000", 00:19:06.569 "listen_address": { 00:19:06.569 "trtype": "TCP", 00:19:06.569 "adrfam": "IPv4", 00:19:06.569 "traddr": "10.0.0.2", 00:19:06.569 "trsvcid": "4420" 00:19:06.569 }, 00:19:06.569 "peer_address": { 00:19:06.569 "trtype": "TCP", 00:19:06.569 "adrfam": "IPv4", 00:19:06.569 "traddr": "10.0.0.1", 00:19:06.569 "trsvcid": "47658" 00:19:06.569 }, 00:19:06.569 "auth": { 00:19:06.569 "state": "completed", 00:19:06.569 "digest": "sha384", 00:19:06.569 "dhgroup": "null" 00:19:06.569 } 00:19:06.569 } 00:19:06.569 ]' 00:19:06.569 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.826 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.826 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.827 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:06.827 08:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.827 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.827 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.827 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.104 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:19:08.056 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.057 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.057 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.057 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.057 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.057 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.057 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:08.057 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.314 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.882 00:19:08.882 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.882 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.882 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.882 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.882 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.882 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.882 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.882 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.882 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.882 { 00:19:08.882 "cntlid": 55, 00:19:08.883 "qid": 0, 00:19:08.883 "state": "enabled", 00:19:08.883 "thread": "nvmf_tgt_poll_group_000", 00:19:08.883 "listen_address": { 00:19:08.883 "trtype": "TCP", 00:19:08.883 "adrfam": "IPv4", 00:19:08.883 "traddr": "10.0.0.2", 00:19:08.883 "trsvcid": "4420" 00:19:08.883 }, 00:19:08.883 "peer_address": { 00:19:08.883 "trtype": "TCP", 00:19:08.883 "adrfam": "IPv4", 00:19:08.883 "traddr": "10.0.0.1", 00:19:08.883 "trsvcid": "47690" 00:19:08.883 }, 00:19:08.883 "auth": { 
00:19:08.883 "state": "completed", 00:19:08.883 "digest": "sha384", 00:19:08.883 "dhgroup": "null" 00:19:08.883 } 00:19:08.883 } 00:19:08.883 ]' 00:19:08.883 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.140 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.140 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.140 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:09.140 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.140 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.140 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.140 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.397 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:19:10.331 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.331 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.331 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.331 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.331 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.331 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.331 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.331 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:10.331 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.589 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.158 00:19:11.158 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.158 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.158 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.158 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.158 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.158 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:11.158 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.158 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.158 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.158 { 00:19:11.158 "cntlid": 57, 00:19:11.158 "qid": 0, 00:19:11.158 "state": "enabled", 00:19:11.158 "thread": "nvmf_tgt_poll_group_000", 00:19:11.158 "listen_address": { 00:19:11.158 "trtype": "TCP", 00:19:11.158 "adrfam": "IPv4", 00:19:11.158 "traddr": "10.0.0.2", 00:19:11.158 "trsvcid": "4420" 00:19:11.158 }, 00:19:11.159 "peer_address": { 00:19:11.159 "trtype": "TCP", 00:19:11.159 "adrfam": "IPv4", 00:19:11.159 "traddr": "10.0.0.1", 00:19:11.159 "trsvcid": "40544" 00:19:11.159 }, 00:19:11.159 "auth": { 00:19:11.159 "state": "completed", 00:19:11.159 "digest": "sha384", 00:19:11.159 "dhgroup": "ffdhe2048" 00:19:11.159 } 00:19:11.159 } 00:19:11.159 ]' 00:19:11.159 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.417 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.417 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.417 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.417 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.417 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.417 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.417 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.675 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:19:12.610 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.610 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.610 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.610 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.610 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.610 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.610 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:12.610 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:12.868 08:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:12.868 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.868 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.868 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.868 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:12.868 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.869 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.869 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.869 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.869 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.869 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.869 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:13.128 00:19:13.387 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.387 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.387 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.387 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.387 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.387 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.387 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.646 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.646 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.646 { 00:19:13.646 "cntlid": 59, 00:19:13.646 "qid": 0, 00:19:13.646 "state": "enabled", 00:19:13.646 "thread": "nvmf_tgt_poll_group_000", 00:19:13.646 "listen_address": { 00:19:13.646 "trtype": "TCP", 00:19:13.646 "adrfam": "IPv4", 00:19:13.646 "traddr": "10.0.0.2", 00:19:13.646 "trsvcid": "4420" 00:19:13.646 }, 00:19:13.646 "peer_address": { 00:19:13.646 "trtype": "TCP", 00:19:13.646 "adrfam": "IPv4", 00:19:13.646 "traddr": "10.0.0.1", 00:19:13.646 "trsvcid": "40568" 00:19:13.646 }, 00:19:13.646 "auth": { 00:19:13.646 "state": "completed", 00:19:13.646 "digest": "sha384", 00:19:13.646 "dhgroup": "ffdhe2048" 00:19:13.646 } 00:19:13.646 } 00:19:13.646 ]' 00:19:13.646 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.646 
08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.646 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.646 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.646 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.646 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.646 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.646 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.906 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:19:14.840 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.097 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.097 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.097 08:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.097 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.097 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.097 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:15.097 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.355 08:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.355 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.613 00:19:15.613 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.613 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.613 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.870 { 
00:19:15.870 "cntlid": 61, 00:19:15.870 "qid": 0, 00:19:15.870 "state": "enabled", 00:19:15.870 "thread": "nvmf_tgt_poll_group_000", 00:19:15.870 "listen_address": { 00:19:15.870 "trtype": "TCP", 00:19:15.870 "adrfam": "IPv4", 00:19:15.870 "traddr": "10.0.0.2", 00:19:15.870 "trsvcid": "4420" 00:19:15.870 }, 00:19:15.870 "peer_address": { 00:19:15.870 "trtype": "TCP", 00:19:15.870 "adrfam": "IPv4", 00:19:15.870 "traddr": "10.0.0.1", 00:19:15.870 "trsvcid": "40598" 00:19:15.870 }, 00:19:15.870 "auth": { 00:19:15.870 "state": "completed", 00:19:15.870 "digest": "sha384", 00:19:15.870 "dhgroup": "ffdhe2048" 00:19:15.870 } 00:19:15.870 } 00:19:15.870 ]' 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.870 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.128 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:19:17.061 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.061 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.061 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.061 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.061 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.061 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.061 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:17.061 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.629 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.888 00:19:17.888 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.888 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.888 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.146 08:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.146 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.146 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.146 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.146 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.146 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.146 { 00:19:18.146 "cntlid": 63, 00:19:18.146 "qid": 0, 00:19:18.146 "state": "enabled", 00:19:18.146 "thread": "nvmf_tgt_poll_group_000", 00:19:18.146 "listen_address": { 00:19:18.146 "trtype": "TCP", 00:19:18.146 "adrfam": "IPv4", 00:19:18.146 "traddr": "10.0.0.2", 00:19:18.146 "trsvcid": "4420" 00:19:18.146 }, 00:19:18.146 "peer_address": { 00:19:18.146 "trtype": "TCP", 00:19:18.146 "adrfam": "IPv4", 00:19:18.146 "traddr": "10.0.0.1", 00:19:18.146 "trsvcid": "40630" 00:19:18.146 }, 00:19:18.146 "auth": { 00:19:18.146 "state": "completed", 00:19:18.146 "digest": "sha384", 00:19:18.146 "dhgroup": "ffdhe2048" 00:19:18.146 } 00:19:18.146 } 00:19:18.146 ]' 00:19:18.146 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.146 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.146 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.147 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.147 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.147 08:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.147 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.147 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.714 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:19:19.650 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.650 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.650 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.650 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.650 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.651 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.651 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.651 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.651 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.908 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.908 08:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.166 00:19:20.166 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.166 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.166 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.424 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.424 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.424 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.424 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.424 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.424 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.424 { 00:19:20.424 "cntlid": 65, 00:19:20.424 "qid": 0, 00:19:20.424 "state": "enabled", 00:19:20.424 "thread": "nvmf_tgt_poll_group_000", 00:19:20.424 "listen_address": { 00:19:20.424 "trtype": "TCP", 00:19:20.424 "adrfam": "IPv4", 00:19:20.424 "traddr": "10.0.0.2", 00:19:20.424 "trsvcid": "4420" 00:19:20.424 }, 00:19:20.424 "peer_address": { 00:19:20.424 "trtype": "TCP", 00:19:20.424 "adrfam": "IPv4", 00:19:20.424 "traddr": "10.0.0.1", 
00:19:20.424 "trsvcid": "40644" 00:19:20.424 }, 00:19:20.424 "auth": { 00:19:20.424 "state": "completed", 00:19:20.424 "digest": "sha384", 00:19:20.424 "dhgroup": "ffdhe3072" 00:19:20.424 } 00:19:20.424 } 00:19:20.424 ]' 00:19:20.424 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.682 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.682 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.682 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.682 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.682 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.682 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.682 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.940 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:19:21.874 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:21.874 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.874 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.874 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.874 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.874 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.874 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:21.874 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.444 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.702 00:19:22.702 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.702 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.702 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.960 { 00:19:22.960 "cntlid": 67, 00:19:22.960 "qid": 0, 00:19:22.960 "state": "enabled", 00:19:22.960 "thread": "nvmf_tgt_poll_group_000", 00:19:22.960 "listen_address": { 00:19:22.960 "trtype": "TCP", 00:19:22.960 "adrfam": "IPv4", 00:19:22.960 "traddr": "10.0.0.2", 00:19:22.960 "trsvcid": "4420" 00:19:22.960 }, 00:19:22.960 "peer_address": { 00:19:22.960 "trtype": "TCP", 00:19:22.960 "adrfam": "IPv4", 00:19:22.960 "traddr": "10.0.0.1", 00:19:22.960 "trsvcid": "40582" 00:19:22.960 }, 00:19:22.960 "auth": { 00:19:22.960 "state": "completed", 00:19:22.960 "digest": "sha384", 00:19:22.960 "dhgroup": "ffdhe3072" 00:19:22.960 } 00:19:22.960 } 00:19:22.960 ]' 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.960 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.220 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:19:24.190 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.190 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.190 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.190 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.190 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.190 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.190 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.190 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.759 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.017 00:19:25.017 08:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.017 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.017 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.275 { 00:19:25.275 "cntlid": 69, 00:19:25.275 "qid": 0, 00:19:25.275 "state": "enabled", 00:19:25.275 "thread": "nvmf_tgt_poll_group_000", 00:19:25.275 "listen_address": { 00:19:25.275 "trtype": "TCP", 00:19:25.275 "adrfam": "IPv4", 00:19:25.275 "traddr": "10.0.0.2", 00:19:25.275 "trsvcid": "4420" 00:19:25.275 }, 00:19:25.275 "peer_address": { 00:19:25.275 "trtype": "TCP", 00:19:25.275 "adrfam": "IPv4", 00:19:25.275 "traddr": "10.0.0.1", 00:19:25.275 "trsvcid": "40600" 00:19:25.275 }, 00:19:25.275 "auth": { 00:19:25.275 "state": "completed", 00:19:25.275 "digest": "sha384", 00:19:25.275 "dhgroup": "ffdhe3072" 00:19:25.275 } 00:19:25.275 } 00:19:25.275 ]' 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.275 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.534 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:19:26.470 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.470 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.470 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.470 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:26.730 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.730 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.730 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.730 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.991 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.250 00:19:27.250 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.250 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.250 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.507 { 00:19:27.507 "cntlid": 71, 00:19:27.507 "qid": 0, 00:19:27.507 "state": "enabled", 00:19:27.507 "thread": "nvmf_tgt_poll_group_000", 
00:19:27.507 "listen_address": { 00:19:27.507 "trtype": "TCP", 00:19:27.507 "adrfam": "IPv4", 00:19:27.507 "traddr": "10.0.0.2", 00:19:27.507 "trsvcid": "4420" 00:19:27.507 }, 00:19:27.507 "peer_address": { 00:19:27.507 "trtype": "TCP", 00:19:27.507 "adrfam": "IPv4", 00:19:27.507 "traddr": "10.0.0.1", 00:19:27.507 "trsvcid": "40622" 00:19:27.507 }, 00:19:27.507 "auth": { 00:19:27.507 "state": "completed", 00:19:27.507 "digest": "sha384", 00:19:27.507 "dhgroup": "ffdhe3072" 00:19:27.507 } 00:19:27.507 } 00:19:27.507 ]' 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.507 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.767 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 
00:19:28.704 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.704 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.704 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.704 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.704 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.704 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.704 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.704 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.704 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.963 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:28.963 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.963 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.963 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.963 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:28.963 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.963 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.963 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.963 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.222 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.222 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.222 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.480 00:19:29.480 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.480 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.480 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.738 08:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.738 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.738 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.738 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.738 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.738 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.738 { 00:19:29.738 "cntlid": 73, 00:19:29.738 "qid": 0, 00:19:29.738 "state": "enabled", 00:19:29.738 "thread": "nvmf_tgt_poll_group_000", 00:19:29.738 "listen_address": { 00:19:29.738 "trtype": "TCP", 00:19:29.738 "adrfam": "IPv4", 00:19:29.738 "traddr": "10.0.0.2", 00:19:29.738 "trsvcid": "4420" 00:19:29.738 }, 00:19:29.738 "peer_address": { 00:19:29.738 "trtype": "TCP", 00:19:29.738 "adrfam": "IPv4", 00:19:29.738 "traddr": "10.0.0.1", 00:19:29.738 "trsvcid": "40636" 00:19:29.738 }, 00:19:29.738 "auth": { 00:19:29.738 "state": "completed", 00:19:29.738 "digest": "sha384", 00:19:29.738 "dhgroup": "ffdhe4096" 00:19:29.738 } 00:19:29.738 } 00:19:29.738 ]' 00:19:29.738 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.738 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.738 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.738 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.738 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.996 08:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.996 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.996 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.254 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:19:31.189 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.189 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.189 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.189 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.189 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.189 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.189 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:19:31.189 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.446 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.447 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.704 00:19:31.704 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.704 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.704 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.961 { 00:19:31.961 "cntlid": 75, 00:19:31.961 "qid": 0, 00:19:31.961 "state": "enabled", 00:19:31.961 "thread": "nvmf_tgt_poll_group_000", 00:19:31.961 "listen_address": { 00:19:31.961 "trtype": "TCP", 00:19:31.961 "adrfam": "IPv4", 00:19:31.961 "traddr": "10.0.0.2", 00:19:31.961 "trsvcid": "4420" 00:19:31.961 }, 00:19:31.961 "peer_address": { 00:19:31.961 "trtype": "TCP", 00:19:31.961 "adrfam": "IPv4", 00:19:31.961 "traddr": "10.0.0.1", 00:19:31.961 "trsvcid": "55498" 00:19:31.961 
}, 00:19:31.961 "auth": { 00:19:31.961 "state": "completed", 00:19:31.961 "digest": "sha384", 00:19:31.961 "dhgroup": "ffdhe4096" 00:19:31.961 } 00:19:31.961 } 00:19:31.961 ]' 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.961 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.220 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.220 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.220 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.479 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:19:33.412 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.412 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.412 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.412 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.412 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.412 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.412 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:33.412 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.671 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.929 00:19:33.929 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.929 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.929 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.187 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.187 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.187 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.187 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:34.187 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.187 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.187 { 00:19:34.187 "cntlid": 77, 00:19:34.187 "qid": 0, 00:19:34.187 "state": "enabled", 00:19:34.187 "thread": "nvmf_tgt_poll_group_000", 00:19:34.187 "listen_address": { 00:19:34.187 "trtype": "TCP", 00:19:34.187 "adrfam": "IPv4", 00:19:34.187 "traddr": "10.0.0.2", 00:19:34.187 "trsvcid": "4420" 00:19:34.187 }, 00:19:34.187 "peer_address": { 00:19:34.187 "trtype": "TCP", 00:19:34.187 "adrfam": "IPv4", 00:19:34.187 "traddr": "10.0.0.1", 00:19:34.187 "trsvcid": "55518" 00:19:34.187 }, 00:19:34.187 "auth": { 00:19:34.187 "state": "completed", 00:19:34.187 "digest": "sha384", 00:19:34.187 "dhgroup": "ffdhe4096" 00:19:34.187 } 00:19:34.187 } 00:19:34.187 ]' 00:19:34.187 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.187 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.187 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.446 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.446 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.446 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.446 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.446 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:34.704 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:19:35.641 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.641 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.641 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.641 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.641 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.641 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.641 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:35.641 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:35.900 08:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.900 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.468 00:19:36.468 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.468 08:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.468 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.468 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.468 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.468 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.468 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.468 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.468 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.468 { 00:19:36.468 "cntlid": 79, 00:19:36.468 "qid": 0, 00:19:36.468 "state": "enabled", 00:19:36.468 "thread": "nvmf_tgt_poll_group_000", 00:19:36.468 "listen_address": { 00:19:36.468 "trtype": "TCP", 00:19:36.468 "adrfam": "IPv4", 00:19:36.468 "traddr": "10.0.0.2", 00:19:36.468 "trsvcid": "4420" 00:19:36.468 }, 00:19:36.468 "peer_address": { 00:19:36.468 "trtype": "TCP", 00:19:36.468 "adrfam": "IPv4", 00:19:36.468 "traddr": "10.0.0.1", 00:19:36.468 "trsvcid": "55540" 00:19:36.468 }, 00:19:36.468 "auth": { 00:19:36.468 "state": "completed", 00:19:36.468 "digest": "sha384", 00:19:36.469 "dhgroup": "ffdhe4096" 00:19:36.469 } 00:19:36.469 } 00:19:36.469 ]' 00:19:36.469 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.727 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.727 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.727 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.727 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.727 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.727 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.727 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.985 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:19:37.921 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.921 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.921 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.921 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.921 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.921 08:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.921 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.921 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:37.921 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:38.179 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:38.179 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.179 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.179 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:38.179 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.179 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.179 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.179 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.179 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.179 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.180 08:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.180 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.745 00:19:38.745 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.745 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.745 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.003 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.003 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.003 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.003 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.003 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.003 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.003 { 00:19:39.003 "cntlid": 81, 00:19:39.003 "qid": 0, 00:19:39.003 "state": "enabled", 00:19:39.003 "thread": 
"nvmf_tgt_poll_group_000", 00:19:39.003 "listen_address": { 00:19:39.003 "trtype": "TCP", 00:19:39.003 "adrfam": "IPv4", 00:19:39.003 "traddr": "10.0.0.2", 00:19:39.003 "trsvcid": "4420" 00:19:39.003 }, 00:19:39.003 "peer_address": { 00:19:39.003 "trtype": "TCP", 00:19:39.003 "adrfam": "IPv4", 00:19:39.003 "traddr": "10.0.0.1", 00:19:39.003 "trsvcid": "55566" 00:19:39.003 }, 00:19:39.003 "auth": { 00:19:39.004 "state": "completed", 00:19:39.004 "digest": "sha384", 00:19:39.004 "dhgroup": "ffdhe6144" 00:19:39.004 } 00:19:39.004 } 00:19:39.004 ]' 00:19:39.004 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.261 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.261 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.261 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.261 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.261 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.261 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.261 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.520 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:19:40.514 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.514 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.514 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.514 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.514 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.514 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.514 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:40.514 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.772 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.338 00:19:41.338 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.338 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.338 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.596 { 00:19:41.596 "cntlid": 83, 00:19:41.596 "qid": 0, 00:19:41.596 "state": "enabled", 00:19:41.596 "thread": "nvmf_tgt_poll_group_000", 00:19:41.596 "listen_address": { 00:19:41.596 "trtype": "TCP", 00:19:41.596 "adrfam": "IPv4", 00:19:41.596 "traddr": "10.0.0.2", 00:19:41.596 "trsvcid": "4420" 00:19:41.596 }, 00:19:41.596 "peer_address": { 00:19:41.596 "trtype": "TCP", 00:19:41.596 "adrfam": "IPv4", 00:19:41.596 "traddr": "10.0.0.1", 00:19:41.596 "trsvcid": "32960" 00:19:41.596 }, 00:19:41.596 "auth": { 00:19:41.596 "state": "completed", 00:19:41.596 "digest": "sha384", 00:19:41.596 "dhgroup": "ffdhe6144" 00:19:41.596 } 00:19:41.596 } 00:19:41.596 ]' 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.596 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.855 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.227 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.227 08:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.792 00:19:43.792 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.792 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.792 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.050 { 00:19:44.050 "cntlid": 85, 00:19:44.050 "qid": 0, 00:19:44.050 "state": "enabled", 00:19:44.050 "thread": "nvmf_tgt_poll_group_000", 00:19:44.050 "listen_address": { 00:19:44.050 "trtype": "TCP", 00:19:44.050 "adrfam": "IPv4", 00:19:44.050 "traddr": "10.0.0.2", 00:19:44.050 "trsvcid": "4420" 00:19:44.050 }, 00:19:44.050 "peer_address": { 00:19:44.050 "trtype": "TCP", 00:19:44.050 "adrfam": "IPv4", 00:19:44.050 "traddr": "10.0.0.1", 
00:19:44.050 "trsvcid": "32984" 00:19:44.050 }, 00:19:44.050 "auth": { 00:19:44.050 "state": "completed", 00:19:44.050 "digest": "sha384", 00:19:44.050 "dhgroup": "ffdhe6144" 00:19:44.050 } 00:19:44.050 } 00:19:44.050 ]' 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.050 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.308 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:19:45.679 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.679 08:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.679 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.679 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.679 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.679 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.680 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.244 00:19:46.244 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.244 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.244 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.501 08:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.501 { 00:19:46.501 "cntlid": 87, 00:19:46.501 "qid": 0, 00:19:46.501 "state": "enabled", 00:19:46.501 "thread": "nvmf_tgt_poll_group_000", 00:19:46.501 "listen_address": { 00:19:46.501 "trtype": "TCP", 00:19:46.501 "adrfam": "IPv4", 00:19:46.501 "traddr": "10.0.0.2", 00:19:46.501 "trsvcid": "4420" 00:19:46.501 }, 00:19:46.501 "peer_address": { 00:19:46.501 "trtype": "TCP", 00:19:46.501 "adrfam": "IPv4", 00:19:46.501 "traddr": "10.0.0.1", 00:19:46.501 "trsvcid": "33012" 00:19:46.501 }, 00:19:46.501 "auth": { 00:19:46.501 "state": "completed", 00:19:46.501 "digest": "sha384", 00:19:46.501 "dhgroup": "ffdhe6144" 00:19:46.501 } 00:19:46.501 } 00:19:46.501 ]' 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.501 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.759 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:19:47.690 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.690 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.690 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.690 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.690 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.690 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.690 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.690 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:47.690 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.256 08:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.256 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:49.189 00:19:49.189 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.189 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.189 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.446 { 00:19:49.446 "cntlid": 89, 00:19:49.446 "qid": 0, 00:19:49.446 "state": "enabled", 00:19:49.446 "thread": "nvmf_tgt_poll_group_000", 00:19:49.446 "listen_address": { 00:19:49.446 "trtype": "TCP", 00:19:49.446 "adrfam": "IPv4", 00:19:49.446 "traddr": "10.0.0.2", 00:19:49.446 "trsvcid": "4420" 00:19:49.446 }, 00:19:49.446 "peer_address": { 00:19:49.446 "trtype": "TCP", 00:19:49.446 "adrfam": "IPv4", 00:19:49.446 "traddr": "10.0.0.1", 00:19:49.446 "trsvcid": "33056" 00:19:49.446 }, 00:19:49.446 "auth": { 00:19:49.446 "state": "completed", 00:19:49.446 "digest": "sha384", 00:19:49.446 "dhgroup": "ffdhe8192" 00:19:49.446 } 00:19:49.446 } 00:19:49.446 ]' 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.446 
08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.446 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.703 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:19:50.635 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.635 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.635 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:50.635 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.635 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.635 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.635 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:50.635 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.893 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.827 00:19:51.827 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.827 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.827 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:52.085 { 00:19:52.085 "cntlid": 91, 00:19:52.085 "qid": 0, 00:19:52.085 "state": "enabled", 00:19:52.085 "thread": "nvmf_tgt_poll_group_000", 00:19:52.085 "listen_address": { 00:19:52.085 "trtype": "TCP", 00:19:52.085 "adrfam": "IPv4", 00:19:52.085 "traddr": "10.0.0.2", 00:19:52.085 "trsvcid": "4420" 00:19:52.085 }, 00:19:52.085 "peer_address": { 00:19:52.085 "trtype": "TCP", 00:19:52.085 "adrfam": "IPv4", 00:19:52.085 "traddr": "10.0.0.1", 00:19:52.085 "trsvcid": "34544" 00:19:52.085 }, 00:19:52.085 "auth": { 00:19:52.085 "state": "completed", 00:19:52.085 "digest": "sha384", 00:19:52.085 "dhgroup": "ffdhe8192" 00:19:52.085 } 00:19:52.085 } 00:19:52.085 ]' 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.085 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.343 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:19:53.276 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.276 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.276 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.276 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.276 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.276 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.276 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:53.276 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.842 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.843 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.776 00:19:54.776 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.776 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.776 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.776 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.776 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.776 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.776 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.776 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.776 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.776 { 00:19:54.776 "cntlid": 93, 00:19:54.776 "qid": 0, 00:19:54.776 "state": "enabled", 00:19:54.776 "thread": "nvmf_tgt_poll_group_000", 00:19:54.776 "listen_address": { 00:19:54.776 "trtype": "TCP", 00:19:54.776 "adrfam": "IPv4", 00:19:54.776 "traddr": "10.0.0.2", 00:19:54.776 "trsvcid": "4420" 00:19:54.776 }, 00:19:54.776 "peer_address": { 00:19:54.776 "trtype": "TCP", 00:19:54.776 "adrfam": "IPv4", 00:19:54.776 "traddr": "10.0.0.1", 00:19:54.776 "trsvcid": "34578" 00:19:54.776 }, 00:19:54.776 "auth": { 00:19:54.776 "state": "completed", 00:19:54.776 "digest": "sha384", 00:19:54.776 "dhgroup": "ffdhe8192" 00:19:54.776 } 00:19:54.776 } 00:19:54.776 ]' 00:19:54.776 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.038 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.038 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.038 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:19:55.038 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.038 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.038 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.038 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.301 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:19:56.233 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.233 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.233 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.234 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.234 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.234 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.234 08:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:56.234 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:19:56.799 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.394 00:19:57.652 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.652 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.652 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.652 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.652 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.652 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.652 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.910 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.910 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.910 { 00:19:57.910 "cntlid": 95, 00:19:57.910 "qid": 0, 00:19:57.910 "state": "enabled", 00:19:57.910 "thread": "nvmf_tgt_poll_group_000", 00:19:57.910 "listen_address": { 00:19:57.910 "trtype": "TCP", 00:19:57.910 "adrfam": "IPv4", 00:19:57.910 "traddr": "10.0.0.2", 00:19:57.910 "trsvcid": "4420" 00:19:57.910 }, 00:19:57.910 "peer_address": { 00:19:57.910 "trtype": "TCP", 00:19:57.910 "adrfam": "IPv4", 00:19:57.910 "traddr": "10.0.0.1", 
00:19:57.910 "trsvcid": "34608" 00:19:57.910 }, 00:19:57.910 "auth": { 00:19:57.910 "state": "completed", 00:19:57.910 "digest": "sha384", 00:19:57.910 "dhgroup": "ffdhe8192" 00:19:57.910 } 00:19:57.910 } 00:19:57.910 ]' 00:19:57.910 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.910 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.910 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.910 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.910 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.910 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.910 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.910 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.168 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:19:59.101 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.101 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.101 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.101 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.101 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.101 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:59.101 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.101 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.101 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:59.101 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.359 08:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.359 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.617 00:19:59.617 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.617 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.617 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.875 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.875 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:59.875 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.875 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.875 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.875 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.875 { 00:19:59.875 "cntlid": 97, 00:19:59.875 "qid": 0, 00:19:59.875 "state": "enabled", 00:19:59.875 "thread": "nvmf_tgt_poll_group_000", 00:19:59.875 "listen_address": { 00:19:59.875 "trtype": "TCP", 00:19:59.875 "adrfam": "IPv4", 00:19:59.875 "traddr": "10.0.0.2", 00:19:59.875 "trsvcid": "4420" 00:19:59.875 }, 00:19:59.875 "peer_address": { 00:19:59.875 "trtype": "TCP", 00:19:59.875 "adrfam": "IPv4", 00:19:59.875 "traddr": "10.0.0.1", 00:19:59.875 "trsvcid": "34618" 00:19:59.875 }, 00:19:59.875 "auth": { 00:19:59.876 "state": "completed", 00:19:59.876 "digest": "sha512", 00:19:59.876 "dhgroup": "null" 00:19:59.876 } 00:19:59.876 } 00:19:59.876 ]' 00:19:59.876 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.876 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.876 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.133 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:00.133 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.133 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.133 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:00.133 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.391 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:20:01.325 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.325 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.325 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.325 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.325 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.325 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.325 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:01.325 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.583 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.840 00:20:01.840 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.840 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.840 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.098 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.098 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.098 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.098 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.098 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.098 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.098 { 00:20:02.098 "cntlid": 99, 00:20:02.098 "qid": 0, 00:20:02.098 "state": "enabled", 00:20:02.098 "thread": "nvmf_tgt_poll_group_000", 00:20:02.098 "listen_address": { 00:20:02.098 "trtype": "TCP", 00:20:02.098 "adrfam": "IPv4", 00:20:02.098 "traddr": "10.0.0.2", 00:20:02.098 "trsvcid": "4420" 00:20:02.098 }, 00:20:02.098 "peer_address": { 00:20:02.098 "trtype": "TCP", 00:20:02.098 "adrfam": "IPv4", 00:20:02.098 "traddr": "10.0.0.1", 00:20:02.098 "trsvcid": "55684" 00:20:02.098 }, 00:20:02.098 "auth": { 00:20:02.098 "state": "completed", 00:20:02.098 "digest": "sha512", 00:20:02.098 "dhgroup": "null" 00:20:02.098 } 00:20:02.098 } 00:20:02.098 ]' 00:20:02.098 
08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.355 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.355 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.355 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:02.355 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.355 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.355 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.355 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.613 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:20:03.547 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.547 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.547 08:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.547 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.547 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.547 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.547 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:03.547 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.805 08:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.805 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.063 00:20:04.063 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.063 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.063 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.321 { 00:20:04.321 "cntlid": 101, 00:20:04.321 "qid": 0, 00:20:04.321 "state": "enabled", 00:20:04.321 "thread": "nvmf_tgt_poll_group_000", 00:20:04.321 "listen_address": { 00:20:04.321 "trtype": "TCP", 00:20:04.321 "adrfam": "IPv4", 00:20:04.321 "traddr": "10.0.0.2", 00:20:04.321 "trsvcid": "4420" 00:20:04.321 }, 00:20:04.321 "peer_address": { 00:20:04.321 "trtype": "TCP", 00:20:04.321 "adrfam": "IPv4", 00:20:04.321 "traddr": "10.0.0.1", 00:20:04.321 "trsvcid": "55710" 00:20:04.321 }, 00:20:04.321 "auth": { 00:20:04.321 "state": "completed", 00:20:04.321 "digest": "sha512", 00:20:04.321 "dhgroup": "null" 00:20:04.321 } 00:20:04.321 } 00:20:04.321 ]' 00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:04.321 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.579 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.579 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.579 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.837 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:20:05.793 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.793 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.793 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.793 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.793 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.793 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.793 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:05.793 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:06.052 08:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.052 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.310 00:20:06.310 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.310 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.310 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.568 { 00:20:06.568 "cntlid": 103, 00:20:06.568 "qid": 0, 00:20:06.568 "state": "enabled", 00:20:06.568 "thread": "nvmf_tgt_poll_group_000", 00:20:06.568 "listen_address": { 00:20:06.568 "trtype": "TCP", 00:20:06.568 "adrfam": "IPv4", 00:20:06.568 "traddr": "10.0.0.2", 00:20:06.568 "trsvcid": "4420" 00:20:06.568 }, 00:20:06.568 "peer_address": { 00:20:06.568 "trtype": "TCP", 00:20:06.568 "adrfam": "IPv4", 00:20:06.568 "traddr": "10.0.0.1", 00:20:06.568 "trsvcid": "55730" 00:20:06.568 }, 00:20:06.568 "auth": { 00:20:06.568 "state": "completed", 00:20:06.568 "digest": "sha512", 00:20:06.568 "dhgroup": "null" 00:20:06.568 } 00:20:06.568 } 00:20:06.568 ]' 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.568 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.826 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:20:07.759 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.017 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.017 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.017 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.017 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.017 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.017 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.017 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:08.017 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.275 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.275 08:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.532 00:20:08.532 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.532 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.532 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.790 { 00:20:08.790 "cntlid": 105, 00:20:08.790 "qid": 0, 00:20:08.790 "state": "enabled", 00:20:08.790 "thread": "nvmf_tgt_poll_group_000", 00:20:08.790 "listen_address": { 00:20:08.790 "trtype": "TCP", 00:20:08.790 "adrfam": "IPv4", 00:20:08.790 "traddr": "10.0.0.2", 00:20:08.790 "trsvcid": "4420" 00:20:08.790 }, 00:20:08.790 "peer_address": { 00:20:08.790 "trtype": "TCP", 00:20:08.790 "adrfam": "IPv4", 00:20:08.790 "traddr": "10.0.0.1", 
00:20:08.790 "trsvcid": "55760" 00:20:08.790 }, 00:20:08.790 "auth": { 00:20:08.790 "state": "completed", 00:20:08.790 "digest": "sha512", 00:20:08.790 "dhgroup": "ffdhe2048" 00:20:08.790 } 00:20:08.790 } 00:20:08.790 ]' 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.790 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.048 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:20:09.981 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:09.981 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.981 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.981 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.981 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.981 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.981 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.981 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.547 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.805 00:20:10.805 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.805 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.805 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.063 { 00:20:11.063 "cntlid": 107, 00:20:11.063 "qid": 0, 00:20:11.063 "state": "enabled", 00:20:11.063 "thread": "nvmf_tgt_poll_group_000", 00:20:11.063 "listen_address": { 00:20:11.063 "trtype": "TCP", 00:20:11.063 "adrfam": "IPv4", 00:20:11.063 "traddr": "10.0.0.2", 00:20:11.063 "trsvcid": "4420" 00:20:11.063 }, 00:20:11.063 "peer_address": { 00:20:11.063 "trtype": "TCP", 00:20:11.063 "adrfam": "IPv4", 00:20:11.063 "traddr": "10.0.0.1", 00:20:11.063 "trsvcid": "55784" 00:20:11.063 }, 00:20:11.063 "auth": { 00:20:11.063 "state": "completed", 00:20:11.063 "digest": "sha512", 00:20:11.063 "dhgroup": "ffdhe2048" 00:20:11.063 } 00:20:11.063 } 00:20:11.063 ]' 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.063 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.327 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:20:12.308 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.308 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.308 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.308 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.308 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.308 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.308 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:12.308 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.566 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.824 00:20:13.082 08:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.082 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.082 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.082 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.082 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.082 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.082 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.340 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.340 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.340 { 00:20:13.340 "cntlid": 109, 00:20:13.340 "qid": 0, 00:20:13.340 "state": "enabled", 00:20:13.340 "thread": "nvmf_tgt_poll_group_000", 00:20:13.340 "listen_address": { 00:20:13.340 "trtype": "TCP", 00:20:13.340 "adrfam": "IPv4", 00:20:13.340 "traddr": "10.0.0.2", 00:20:13.340 "trsvcid": "4420" 00:20:13.340 }, 00:20:13.340 "peer_address": { 00:20:13.340 "trtype": "TCP", 00:20:13.340 "adrfam": "IPv4", 00:20:13.340 "traddr": "10.0.0.1", 00:20:13.340 "trsvcid": "36508" 00:20:13.340 }, 00:20:13.340 "auth": { 00:20:13.340 "state": "completed", 00:20:13.340 "digest": "sha512", 00:20:13.340 "dhgroup": "ffdhe2048" 00:20:13.340 } 00:20:13.340 } 00:20:13.340 ]' 00:20:13.340 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.340 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.340 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.340 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.340 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.340 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.340 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.340 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.598 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:20:14.532 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.532 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.532 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.532 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.532 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.532 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.532 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:14.532 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.790 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.075 00:20:15.075 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.075 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.075 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.333 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.333 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.333 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.333 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.333 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.333 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.333 { 00:20:15.333 "cntlid": 111, 00:20:15.333 "qid": 0, 00:20:15.333 "state": "enabled", 00:20:15.333 "thread": "nvmf_tgt_poll_group_000", 
00:20:15.333 "listen_address": { 00:20:15.333 "trtype": "TCP", 00:20:15.333 "adrfam": "IPv4", 00:20:15.333 "traddr": "10.0.0.2", 00:20:15.333 "trsvcid": "4420" 00:20:15.333 }, 00:20:15.333 "peer_address": { 00:20:15.333 "trtype": "TCP", 00:20:15.333 "adrfam": "IPv4", 00:20:15.333 "traddr": "10.0.0.1", 00:20:15.333 "trsvcid": "36532" 00:20:15.333 }, 00:20:15.333 "auth": { 00:20:15.333 "state": "completed", 00:20:15.333 "digest": "sha512", 00:20:15.333 "dhgroup": "ffdhe2048" 00:20:15.333 } 00:20:15.333 } 00:20:15.333 ]' 00:20:15.333 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.333 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.333 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.592 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.592 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.592 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.592 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.592 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.850 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 
00:20:16.782 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.782 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.782 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.782 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.782 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.782 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.782 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.782 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.782 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.040 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.299 00:20:17.299 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.299 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.299 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.557 08:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.557 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.557 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.557 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.557 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.557 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.557 { 00:20:17.557 "cntlid": 113, 00:20:17.557 "qid": 0, 00:20:17.557 "state": "enabled", 00:20:17.557 "thread": "nvmf_tgt_poll_group_000", 00:20:17.557 "listen_address": { 00:20:17.557 "trtype": "TCP", 00:20:17.557 "adrfam": "IPv4", 00:20:17.557 "traddr": "10.0.0.2", 00:20:17.557 "trsvcid": "4420" 00:20:17.557 }, 00:20:17.557 "peer_address": { 00:20:17.557 "trtype": "TCP", 00:20:17.557 "adrfam": "IPv4", 00:20:17.557 "traddr": "10.0.0.1", 00:20:17.557 "trsvcid": "36564" 00:20:17.557 }, 00:20:17.557 "auth": { 00:20:17.557 "state": "completed", 00:20:17.557 "digest": "sha512", 00:20:17.557 "dhgroup": "ffdhe3072" 00:20:17.557 } 00:20:17.557 } 00:20:17.557 ]' 00:20:17.557 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.557 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.557 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.557 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.557 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.815 08:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.815 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.815 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.073 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:20:19.006 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.006 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.006 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.006 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.006 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.006 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.006 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:20:19.006 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:19.263 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:19.263 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.263 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.263 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:19.263 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:19.263 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.264 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.264 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.264 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.264 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.264 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.264 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.521 00:20:19.521 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.521 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.521 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.779 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.779 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.779 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.779 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.779 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.779 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.779 { 00:20:19.779 "cntlid": 115, 00:20:19.779 "qid": 0, 00:20:19.779 "state": "enabled", 00:20:19.779 "thread": "nvmf_tgt_poll_group_000", 00:20:19.779 "listen_address": { 00:20:19.779 "trtype": "TCP", 00:20:19.779 "adrfam": "IPv4", 00:20:19.779 "traddr": "10.0.0.2", 00:20:19.779 "trsvcid": "4420" 00:20:19.779 }, 00:20:19.779 "peer_address": { 00:20:19.779 "trtype": "TCP", 00:20:19.779 "adrfam": "IPv4", 00:20:19.779 "traddr": "10.0.0.1", 00:20:19.779 "trsvcid": "36588" 00:20:19.779 
}, 00:20:19.779 "auth": { 00:20:19.779 "state": "completed", 00:20:19.779 "digest": "sha512", 00:20:19.779 "dhgroup": "ffdhe3072" 00:20:19.779 } 00:20:19.779 } 00:20:19.779 ]' 00:20:19.779 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.779 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.779 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.044 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.044 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.044 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.044 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.044 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.303 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:20:21.237 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.237 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.237 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.237 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.237 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.237 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.237 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:21.237 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:21.495 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.496 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.754 00:20:21.754 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.754 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.754 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.011 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.011 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.011 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.011 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:22.011 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.011 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.011 { 00:20:22.011 "cntlid": 117, 00:20:22.011 "qid": 0, 00:20:22.011 "state": "enabled", 00:20:22.011 "thread": "nvmf_tgt_poll_group_000", 00:20:22.011 "listen_address": { 00:20:22.011 "trtype": "TCP", 00:20:22.011 "adrfam": "IPv4", 00:20:22.011 "traddr": "10.0.0.2", 00:20:22.011 "trsvcid": "4420" 00:20:22.011 }, 00:20:22.011 "peer_address": { 00:20:22.011 "trtype": "TCP", 00:20:22.011 "adrfam": "IPv4", 00:20:22.011 "traddr": "10.0.0.1", 00:20:22.011 "trsvcid": "43126" 00:20:22.011 }, 00:20:22.011 "auth": { 00:20:22.011 "state": "completed", 00:20:22.011 "digest": "sha512", 00:20:22.011 "dhgroup": "ffdhe3072" 00:20:22.011 } 00:20:22.011 } 00:20:22.011 ]' 00:20:22.011 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.011 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.011 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.269 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.269 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.269 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.269 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.269 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:22.526 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:20:23.461 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.461 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.461 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.461 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.461 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.461 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.461 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:23.461 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:23.720 08:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.720 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.978 00:20:23.978 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.978 08:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.978 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.236 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.236 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.236 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.236 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.236 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.236 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.236 { 00:20:24.236 "cntlid": 119, 00:20:24.236 "qid": 0, 00:20:24.236 "state": "enabled", 00:20:24.236 "thread": "nvmf_tgt_poll_group_000", 00:20:24.236 "listen_address": { 00:20:24.236 "trtype": "TCP", 00:20:24.236 "adrfam": "IPv4", 00:20:24.236 "traddr": "10.0.0.2", 00:20:24.236 "trsvcid": "4420" 00:20:24.236 }, 00:20:24.236 "peer_address": { 00:20:24.236 "trtype": "TCP", 00:20:24.236 "adrfam": "IPv4", 00:20:24.236 "traddr": "10.0.0.1", 00:20:24.236 "trsvcid": "43156" 00:20:24.236 }, 00:20:24.236 "auth": { 00:20:24.236 "state": "completed", 00:20:24.236 "digest": "sha512", 00:20:24.236 "dhgroup": "ffdhe3072" 00:20:24.236 } 00:20:24.236 } 00:20:24.236 ]' 00:20:24.236 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.494 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.494 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.494 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.494 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.494 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.494 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.494 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.752 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:20:25.685 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.685 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.685 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.685 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.685 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.685 08:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.685 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.685 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:25.685 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.943 08:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.943 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.509 00:20:26.509 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.509 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.509 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.509 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.509 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.509 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.509 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.799 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.799 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.799 { 00:20:26.799 "cntlid": 121, 00:20:26.799 "qid": 0, 00:20:26.799 "state": "enabled", 00:20:26.799 "thread": 
"nvmf_tgt_poll_group_000", 00:20:26.799 "listen_address": { 00:20:26.799 "trtype": "TCP", 00:20:26.799 "adrfam": "IPv4", 00:20:26.799 "traddr": "10.0.0.2", 00:20:26.799 "trsvcid": "4420" 00:20:26.799 }, 00:20:26.799 "peer_address": { 00:20:26.799 "trtype": "TCP", 00:20:26.799 "adrfam": "IPv4", 00:20:26.799 "traddr": "10.0.0.1", 00:20:26.799 "trsvcid": "43172" 00:20:26.799 }, 00:20:26.799 "auth": { 00:20:26.799 "state": "completed", 00:20:26.799 "digest": "sha512", 00:20:26.799 "dhgroup": "ffdhe4096" 00:20:26.799 } 00:20:26.799 } 00:20:26.799 ]' 00:20:26.799 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.799 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.799 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.799 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.799 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.799 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.799 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.799 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.057 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:20:27.992 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.992 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.992 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.992 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.992 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.992 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.992 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:27.992 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe4096 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.250 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.816 00:20:28.816 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.816 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.816 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.816 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.816 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.816 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.816 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.816 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.816 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.816 { 00:20:28.816 "cntlid": 123, 00:20:28.816 "qid": 0, 00:20:28.816 "state": "enabled", 00:20:28.816 "thread": "nvmf_tgt_poll_group_000", 00:20:28.816 "listen_address": { 00:20:28.816 "trtype": "TCP", 00:20:28.816 "adrfam": "IPv4", 00:20:28.816 "traddr": "10.0.0.2", 00:20:28.816 "trsvcid": "4420" 00:20:28.816 }, 00:20:28.816 "peer_address": { 00:20:28.816 "trtype": "TCP", 00:20:28.816 "adrfam": "IPv4", 00:20:28.816 "traddr": "10.0.0.1", 00:20:28.816 "trsvcid": "43210" 00:20:28.816 }, 00:20:28.816 "auth": { 00:20:28.816 "state": "completed", 00:20:28.816 "digest": "sha512", 00:20:28.816 "dhgroup": "ffdhe4096" 00:20:28.816 } 00:20:28.816 } 00:20:28.816 ]' 00:20:28.816 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.074 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.074 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.074 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:29.074 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.074 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.074 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.074 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.332 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:20:30.266 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.266 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.266 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.266 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.266 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.266 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.266 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:30.266 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:30.524 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:30.524 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.524 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.525 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:30.525 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:30.525 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.525 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.525 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.525 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.525 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.525 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.525 08:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.091 00:20:31.091 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.091 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.091 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.091 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.091 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.091 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.091 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.091 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.091 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.091 { 00:20:31.091 "cntlid": 125, 00:20:31.091 "qid": 0, 00:20:31.091 "state": "enabled", 00:20:31.091 "thread": "nvmf_tgt_poll_group_000", 00:20:31.091 "listen_address": { 00:20:31.091 "trtype": "TCP", 00:20:31.091 "adrfam": "IPv4", 00:20:31.091 "traddr": "10.0.0.2", 00:20:31.091 "trsvcid": "4420" 00:20:31.091 }, 00:20:31.091 "peer_address": { 00:20:31.091 "trtype": "TCP", 00:20:31.091 "adrfam": "IPv4", 00:20:31.091 "traddr": "10.0.0.1", 
00:20:31.091 "trsvcid": "43244" 00:20:31.091 }, 00:20:31.091 "auth": { 00:20:31.092 "state": "completed", 00:20:31.092 "digest": "sha512", 00:20:31.092 "dhgroup": "ffdhe4096" 00:20:31.092 } 00:20:31.092 } 00:20:31.092 ]' 00:20:31.092 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.350 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.350 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.350 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:31.350 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.350 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.350 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.350 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.609 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:20:32.541 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.541 08:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.541 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.541 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.541 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.541 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.541 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:32.541 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.799 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.365 00:20:33.365 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.365 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.365 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.365 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.365 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.365 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.365 08:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.365 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.365 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.365 { 00:20:33.365 "cntlid": 127, 00:20:33.365 "qid": 0, 00:20:33.365 "state": "enabled", 00:20:33.365 "thread": "nvmf_tgt_poll_group_000", 00:20:33.365 "listen_address": { 00:20:33.365 "trtype": "TCP", 00:20:33.365 "adrfam": "IPv4", 00:20:33.365 "traddr": "10.0.0.2", 00:20:33.365 "trsvcid": "4420" 00:20:33.365 }, 00:20:33.365 "peer_address": { 00:20:33.365 "trtype": "TCP", 00:20:33.365 "adrfam": "IPv4", 00:20:33.365 "traddr": "10.0.0.1", 00:20:33.365 "trsvcid": "58084" 00:20:33.365 }, 00:20:33.365 "auth": { 00:20:33.365 "state": "completed", 00:20:33.365 "digest": "sha512", 00:20:33.365 "dhgroup": "ffdhe4096" 00:20:33.365 } 00:20:33.365 } 00:20:33.365 ]' 00:20:33.365 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.624 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.624 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.624 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.624 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.624 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.624 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.624 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.882 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:20:34.815 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.815 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.815 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.815 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.815 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.815 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.815 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.815 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:34.815 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:35.080 08:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.080 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:35.649 00:20:35.649 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.649 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.649 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.906 { 00:20:35.906 "cntlid": 129, 00:20:35.906 "qid": 0, 00:20:35.906 "state": "enabled", 00:20:35.906 "thread": "nvmf_tgt_poll_group_000", 00:20:35.906 "listen_address": { 00:20:35.906 "trtype": "TCP", 00:20:35.906 "adrfam": "IPv4", 00:20:35.906 "traddr": "10.0.0.2", 00:20:35.906 "trsvcid": "4420" 00:20:35.906 }, 00:20:35.906 "peer_address": { 00:20:35.906 "trtype": "TCP", 00:20:35.906 "adrfam": "IPv4", 00:20:35.906 "traddr": "10.0.0.1", 00:20:35.906 "trsvcid": "58108" 00:20:35.906 }, 00:20:35.906 "auth": { 00:20:35.906 "state": "completed", 00:20:35.906 "digest": "sha512", 00:20:35.906 "dhgroup": "ffdhe6144" 00:20:35.906 } 00:20:35.906 } 00:20:35.906 ]' 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.906 
08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.906 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.163 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:20:37.097 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.097 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.097 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:37.097 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.097 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.097 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.355 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:37.355 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.613 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.178 00:20:38.178 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.178 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.178 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:38.436 { 00:20:38.436 "cntlid": 131, 00:20:38.436 "qid": 0, 00:20:38.436 "state": "enabled", 00:20:38.436 "thread": "nvmf_tgt_poll_group_000", 00:20:38.436 "listen_address": { 00:20:38.436 "trtype": "TCP", 00:20:38.436 "adrfam": "IPv4", 00:20:38.436 "traddr": "10.0.0.2", 00:20:38.436 "trsvcid": "4420" 00:20:38.436 }, 00:20:38.436 "peer_address": { 00:20:38.436 "trtype": "TCP", 00:20:38.436 "adrfam": "IPv4", 00:20:38.436 "traddr": "10.0.0.1", 00:20:38.436 "trsvcid": "58136" 00:20:38.436 }, 00:20:38.436 "auth": { 00:20:38.436 "state": "completed", 00:20:38.436 "digest": "sha512", 00:20:38.436 "dhgroup": "ffdhe6144" 00:20:38.436 } 00:20:38.436 } 00:20:38.436 ]' 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.436 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.437 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.695 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:20:39.628 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.628 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.628 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.628 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.628 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.628 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.628 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.628 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.885 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:39.885 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.885 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:39.885 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:39.885 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:39.885 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.885 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.886 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.886 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.886 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.886 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.886 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.819 00:20:40.819 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.819 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.819 08:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.819 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.819 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.819 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.819 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.819 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.819 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.819 { 00:20:40.819 "cntlid": 133, 00:20:40.819 "qid": 0, 00:20:40.819 "state": "enabled", 00:20:40.819 "thread": "nvmf_tgt_poll_group_000", 00:20:40.819 "listen_address": { 00:20:40.819 "trtype": "TCP", 00:20:40.819 "adrfam": "IPv4", 00:20:40.819 "traddr": "10.0.0.2", 00:20:40.819 "trsvcid": "4420" 00:20:40.819 }, 00:20:40.819 "peer_address": { 00:20:40.819 "trtype": "TCP", 00:20:40.819 "adrfam": "IPv4", 00:20:40.819 "traddr": "10.0.0.1", 00:20:40.819 "trsvcid": "58154" 00:20:40.819 }, 00:20:40.819 "auth": { 00:20:40.819 "state": "completed", 00:20:40.819 "digest": "sha512", 00:20:40.819 "dhgroup": "ffdhe6144" 00:20:40.819 } 00:20:40.819 } 00:20:40.819 ]' 00:20:40.819 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.819 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.819 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.819 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.819 08:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.077 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.077 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.077 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.334 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.302 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.302 08:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.867 00:20:43.125 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.125 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.125 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.382 { 00:20:43.382 "cntlid": 135, 00:20:43.382 "qid": 0, 00:20:43.382 "state": "enabled", 00:20:43.382 "thread": "nvmf_tgt_poll_group_000", 00:20:43.382 "listen_address": { 00:20:43.382 "trtype": "TCP", 00:20:43.382 "adrfam": "IPv4", 00:20:43.382 "traddr": "10.0.0.2", 00:20:43.382 "trsvcid": "4420" 00:20:43.382 }, 00:20:43.382 "peer_address": { 00:20:43.382 "trtype": "TCP", 00:20:43.382 "adrfam": "IPv4", 00:20:43.382 "traddr": "10.0.0.1", 00:20:43.382 "trsvcid": 
"37774" 00:20:43.382 }, 00:20:43.382 "auth": { 00:20:43.382 "state": "completed", 00:20:43.382 "digest": "sha512", 00:20:43.382 "dhgroup": "ffdhe6144" 00:20:43.382 } 00:20:43.382 } 00:20:43.382 ]' 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.382 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.640 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:20:44.572 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.572 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.573 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.573 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.573 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.573 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.573 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.573 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:44.573 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.830 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.760 00:20:45.760 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.760 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.760 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.018 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.018 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.018 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.018 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.018 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.018 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.018 { 00:20:46.018 "cntlid": 137, 00:20:46.018 "qid": 0, 00:20:46.018 "state": "enabled", 00:20:46.018 "thread": "nvmf_tgt_poll_group_000", 00:20:46.018 "listen_address": { 00:20:46.018 "trtype": "TCP", 00:20:46.018 "adrfam": "IPv4", 00:20:46.018 "traddr": "10.0.0.2", 00:20:46.018 "trsvcid": "4420" 00:20:46.018 }, 00:20:46.018 "peer_address": { 00:20:46.018 "trtype": "TCP", 00:20:46.018 "adrfam": "IPv4", 00:20:46.018 "traddr": "10.0.0.1", 00:20:46.018 "trsvcid": "37794" 00:20:46.018 }, 00:20:46.018 "auth": { 00:20:46.018 "state": "completed", 00:20:46.018 "digest": "sha512", 00:20:46.018 "dhgroup": "ffdhe8192" 00:20:46.018 } 00:20:46.018 } 00:20:46.018 ]' 00:20:46.018 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.018 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.019 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.019 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.019 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.019 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.019 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.019 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.277 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:47.651 08:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.651 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:48.585 00:20:48.585 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.585 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.585 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.843 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.844 { 00:20:48.844 "cntlid": 139, 00:20:48.844 "qid": 0, 00:20:48.844 "state": "enabled", 00:20:48.844 "thread": "nvmf_tgt_poll_group_000", 00:20:48.844 "listen_address": { 00:20:48.844 "trtype": "TCP", 00:20:48.844 "adrfam": "IPv4", 00:20:48.844 "traddr": "10.0.0.2", 00:20:48.844 "trsvcid": "4420" 00:20:48.844 }, 00:20:48.844 "peer_address": { 00:20:48.844 "trtype": "TCP", 00:20:48.844 "adrfam": "IPv4", 00:20:48.844 "traddr": "10.0.0.1", 00:20:48.844 "trsvcid": "37830" 00:20:48.844 }, 00:20:48.844 "auth": { 00:20:48.844 "state": "completed", 00:20:48.844 "digest": "sha512", 00:20:48.844 "dhgroup": "ffdhe8192" 00:20:48.844 } 00:20:48.844 } 00:20:48.844 ]' 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.844 
08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.844 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.102 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MzNiYjg2ZGVmMWUxZThmM2RjZmRjMDRhZTMxOTM2MGKp5Rkk: --dhchap-ctrl-secret DHHC-1:02:NjIxNGYzMzI5M2Y0Y2I3NWYyMmM1NDJlODVjNzUwYTM0NmFlZjA4ODgwY2E4YmY35CdYOA==: 00:20:50.035 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.035 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.035 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.035 08:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.035 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.035 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.035 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:50.035 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.294 08:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.294 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.228 00:20:51.228 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.228 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.228 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.485 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.485 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.485 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.485 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.485 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.485 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.485 { 
00:20:51.485 "cntlid": 141, 00:20:51.485 "qid": 0, 00:20:51.485 "state": "enabled", 00:20:51.485 "thread": "nvmf_tgt_poll_group_000", 00:20:51.485 "listen_address": { 00:20:51.485 "trtype": "TCP", 00:20:51.485 "adrfam": "IPv4", 00:20:51.485 "traddr": "10.0.0.2", 00:20:51.485 "trsvcid": "4420" 00:20:51.485 }, 00:20:51.485 "peer_address": { 00:20:51.485 "trtype": "TCP", 00:20:51.485 "adrfam": "IPv4", 00:20:51.485 "traddr": "10.0.0.1", 00:20:51.485 "trsvcid": "37866" 00:20:51.485 }, 00:20:51.485 "auth": { 00:20:51.485 "state": "completed", 00:20:51.485 "digest": "sha512", 00:20:51.485 "dhgroup": "ffdhe8192" 00:20:51.485 } 00:20:51.485 } 00:20:51.485 ]' 00:20:51.485 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.486 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.486 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.486 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.486 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.744 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.744 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.744 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.002 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzYyOTlmMmRmNTI4N2E0MDBiZjg1MWY0YzU0YzA5NTg1MjExYjllZjBkMzZjOTNkp9eaHA==: --dhchap-ctrl-secret DHHC-1:01:NzE2MWI2ODUzOWI0NmJlYjZhOWNiMmZlYzkyNTIwYzAg0K5o: 00:20:52.936 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.936 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.936 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.936 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.936 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.936 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.936 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:52.936 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.194 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.151 00:20:54.151 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.151 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.151 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.151 08:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.151 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.151 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.151 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.418 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.418 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.418 { 00:20:54.418 "cntlid": 143, 00:20:54.418 "qid": 0, 00:20:54.418 "state": "enabled", 00:20:54.418 "thread": "nvmf_tgt_poll_group_000", 00:20:54.418 "listen_address": { 00:20:54.418 "trtype": "TCP", 00:20:54.418 "adrfam": "IPv4", 00:20:54.418 "traddr": "10.0.0.2", 00:20:54.418 "trsvcid": "4420" 00:20:54.418 }, 00:20:54.418 "peer_address": { 00:20:54.418 "trtype": "TCP", 00:20:54.418 "adrfam": "IPv4", 00:20:54.418 "traddr": "10.0.0.1", 00:20:54.418 "trsvcid": "37386" 00:20:54.418 }, 00:20:54.418 "auth": { 00:20:54.418 "state": "completed", 00:20:54.418 "digest": "sha512", 00:20:54.418 "dhgroup": "ffdhe8192" 00:20:54.418 } 00:20:54.418 } 00:20:54.418 ]' 00:20:54.418 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.418 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.418 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.418 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.418 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.418 08:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.418 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.418 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.676 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:20:55.610 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.610 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.610 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.610 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.610 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.610 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:55.610 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:55.610 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:55.610 08:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:55.610 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:55.610 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.869 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.800 00:20:56.801 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.801 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.801 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.058 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.058 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.058 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.058 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.058 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.058 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.058 { 00:20:57.058 "cntlid": 145, 00:20:57.058 "qid": 0, 00:20:57.058 "state": "enabled", 
00:20:57.058 "thread": "nvmf_tgt_poll_group_000", 00:20:57.058 "listen_address": { 00:20:57.058 "trtype": "TCP", 00:20:57.058 "adrfam": "IPv4", 00:20:57.058 "traddr": "10.0.0.2", 00:20:57.058 "trsvcid": "4420" 00:20:57.058 }, 00:20:57.058 "peer_address": { 00:20:57.058 "trtype": "TCP", 00:20:57.058 "adrfam": "IPv4", 00:20:57.058 "traddr": "10.0.0.1", 00:20:57.058 "trsvcid": "37404" 00:20:57.058 }, 00:20:57.058 "auth": { 00:20:57.058 "state": "completed", 00:20:57.058 "digest": "sha512", 00:20:57.058 "dhgroup": "ffdhe8192" 00:20:57.058 } 00:20:57.058 } 00:20:57.058 ]' 00:20:57.058 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.316 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.316 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.316 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.316 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.316 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.316 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.316 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.581 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:N2ViZGU5MGQ5YTE2NzZhYmMyZWFmNDYzMDE5ZGY5MDc1ZTBkNjJkMTQ0ZmM5NDBl8yaDbQ==: --dhchap-ctrl-secret DHHC-1:03:MjViZDVhOTg3MDMyODkwYjVmZDI0MzQ0ZWJjMWUxM2UzN2RiN2I1YzYwNDUzYzY0MTY2ODdiMDVjNjRhYjlmOIsqtQA=: 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:58.561 
08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:58.561 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:58.562 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.562 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:58.562 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.562 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:58.562 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.498 request: 00:20:59.498 { 00:20:59.498 "name": "nvme0", 00:20:59.498 "trtype": "tcp", 00:20:59.498 "traddr": "10.0.0.2", 00:20:59.498 "adrfam": "ipv4", 00:20:59.498 "trsvcid": "4420", 00:20:59.498 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:59.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.498 "prchk_reftag": false, 00:20:59.498 "prchk_guard": false, 00:20:59.498 "hdgst": false, 00:20:59.498 "ddgst": false, 00:20:59.498 "dhchap_key": "key2", 
00:20:59.498 "method": "bdev_nvme_attach_controller", 00:20:59.498 "req_id": 1 00:20:59.498 } 00:20:59.498 Got JSON-RPC error response 00:20:59.498 response: 00:20:59.498 { 00:20:59.498 "code": -5, 00:20:59.498 "message": "Input/output error" 00:20:59.498 } 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:59.498 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:00.432 request: 00:21:00.432 { 00:21:00.432 "name": "nvme0", 00:21:00.432 
"trtype": "tcp", 00:21:00.432 "traddr": "10.0.0.2", 00:21:00.432 "adrfam": "ipv4", 00:21:00.432 "trsvcid": "4420", 00:21:00.432 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:00.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.432 "prchk_reftag": false, 00:21:00.432 "prchk_guard": false, 00:21:00.432 "hdgst": false, 00:21:00.432 "ddgst": false, 00:21:00.432 "dhchap_key": "key1", 00:21:00.432 "dhchap_ctrlr_key": "ckey2", 00:21:00.432 "method": "bdev_nvme_attach_controller", 00:21:00.432 "req_id": 1 00:21:00.432 } 00:21:00.432 Got JSON-RPC error response 00:21:00.432 response: 00:21:00.432 { 00:21:00.432 "code": -5, 00:21:00.432 "message": "Input/output error" 00:21:00.432 } 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:00.432 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.432 08:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.366 request: 00:21:01.366 { 00:21:01.366 "name": "nvme0", 00:21:01.366 "trtype": "tcp", 00:21:01.366 "traddr": "10.0.0.2", 00:21:01.366 "adrfam": "ipv4", 00:21:01.366 "trsvcid": "4420", 00:21:01.366 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:01.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.366 "prchk_reftag": false, 00:21:01.366 "prchk_guard": false, 00:21:01.366 "hdgst": false, 00:21:01.366 "ddgst": false, 00:21:01.366 "dhchap_key": "key1", 00:21:01.366 "dhchap_ctrlr_key": "ckey1", 00:21:01.366 "method": "bdev_nvme_attach_controller", 00:21:01.366 "req_id": 1 00:21:01.366 } 00:21:01.366 Got JSON-RPC error response 00:21:01.366 response: 00:21:01.366 { 00:21:01.366 "code": -5, 00:21:01.366 "message": "Input/output error" 00:21:01.366 } 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 970247 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 970247 ']' 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 970247 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 970247 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 970247' 00:21:01.366 killing process with pid 970247 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 970247 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 970247 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=992922 00:21:01.366 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:01.367 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 992922 00:21:01.367 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 992922 ']' 00:21:01.367 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.367 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:01.367 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:01.367 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:01.367 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 992922 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 992922 ']' 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:01.625 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:01.626 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.883 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.883 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:01.883 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:01.883 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.883 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.142 
08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.142 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:03.075 00:21:03.075 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.075 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.075 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.333 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.333 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.333 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.333 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.333 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.333 08:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.333 { 00:21:03.333 "cntlid": 1, 00:21:03.333 "qid": 0, 00:21:03.333 "state": "enabled", 00:21:03.333 "thread": "nvmf_tgt_poll_group_000", 00:21:03.333 "listen_address": { 00:21:03.333 "trtype": "TCP", 00:21:03.333 "adrfam": "IPv4", 00:21:03.333 "traddr": "10.0.0.2", 00:21:03.333 "trsvcid": "4420" 00:21:03.333 }, 00:21:03.333 "peer_address": { 00:21:03.333 "trtype": "TCP", 00:21:03.333 "adrfam": "IPv4", 00:21:03.333 "traddr": "10.0.0.1", 00:21:03.333 "trsvcid": "56034" 00:21:03.333 }, 00:21:03.333 "auth": { 00:21:03.333 "state": "completed", 00:21:03.333 "digest": "sha512", 00:21:03.333 "dhgroup": "ffdhe8192" 00:21:03.333 } 00:21:03.333 } 00:21:03.333 ]' 00:21:03.333 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.333 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.333 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.333 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:03.333 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.591 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.591 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.591 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.850 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGY0NzYyYzQzNWM2Y2U1NGRjOGY5YmVjZWQxMjM3OGYxM2FlMzM5Yjg0MzM1MjlhZmViNWVhMWU5ZWE0ZTI2NuGucvM=: 00:21:04.784 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.784 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.784 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.784 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.784 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.784 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:04.784 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.784 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.784 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.784 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:04.784 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:05.042 08:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.042 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:05.042 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.042 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:05.042 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.042 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:05.042 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.042 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.043 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.300 request: 00:21:05.300 { 00:21:05.300 "name": "nvme0", 00:21:05.300 "trtype": "tcp", 00:21:05.300 
"traddr": "10.0.0.2", 00:21:05.300 "adrfam": "ipv4", 00:21:05.300 "trsvcid": "4420", 00:21:05.300 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:05.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:05.300 "prchk_reftag": false, 00:21:05.300 "prchk_guard": false, 00:21:05.300 "hdgst": false, 00:21:05.300 "ddgst": false, 00:21:05.300 "dhchap_key": "key3", 00:21:05.300 "method": "bdev_nvme_attach_controller", 00:21:05.300 "req_id": 1 00:21:05.300 } 00:21:05.300 Got JSON-RPC error response 00:21:05.300 response: 00:21:05.300 { 00:21:05.300 "code": -5, 00:21:05.300 "message": "Input/output error" 00:21:05.300 } 00:21:05.300 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:05.300 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:05.300 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:05.300 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:05.300 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:05.300 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:05.300 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:05.300 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:05.558 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.558 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:05.558 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.558 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:05.558 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.558 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:05.558 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.558 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.558 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.816 request: 00:21:05.816 { 00:21:05.816 "name": "nvme0", 00:21:05.816 "trtype": "tcp", 00:21:05.816 "traddr": "10.0.0.2", 00:21:05.816 "adrfam": "ipv4", 00:21:05.816 "trsvcid": "4420", 00:21:05.816 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:05.816 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:05.816 "prchk_reftag": false, 00:21:05.816 "prchk_guard": false, 00:21:05.816 "hdgst": false, 00:21:05.816 "ddgst": false, 00:21:05.816 "dhchap_key": "key3", 00:21:05.816 "method": "bdev_nvme_attach_controller", 00:21:05.816 "req_id": 1 00:21:05.816 } 00:21:05.816 Got JSON-RPC error response 00:21:05.816 response: 00:21:05.816 { 00:21:05.816 "code": -5, 00:21:05.816 "message": "Input/output error" 00:21:05.816 } 00:21:05.816 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:05.816 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:05.816 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:05.816 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:05.816 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:05.816 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:05.816 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:05.816 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:05.816 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:05.816 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:06.074 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:06.332 request: 00:21:06.332 { 00:21:06.332 "name": "nvme0", 00:21:06.332 "trtype": "tcp", 00:21:06.332 "traddr": "10.0.0.2", 00:21:06.332 "adrfam": "ipv4", 00:21:06.332 "trsvcid": "4420", 00:21:06.332 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:06.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.332 "prchk_reftag": false, 00:21:06.332 "prchk_guard": false, 00:21:06.332 "hdgst": false, 00:21:06.332 "ddgst": false, 00:21:06.332 "dhchap_key": "key0", 00:21:06.332 "dhchap_ctrlr_key": "key1", 00:21:06.332 "method": "bdev_nvme_attach_controller", 00:21:06.332 "req_id": 1 00:21:06.332 } 00:21:06.332 Got JSON-RPC error response 00:21:06.332 response: 00:21:06.332 { 00:21:06.332 "code": -5, 00:21:06.332 "message": "Input/output error" 00:21:06.332 } 00:21:06.332 08:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:06.332 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:06.332 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:06.332 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:06.332 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:06.332 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:06.589 00:21:06.589 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:06.589 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:06.589 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.847 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.847 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.847 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 970266 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 970266 ']' 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 970266 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 970266 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 970266' 00:21:07.105 killing process with pid 970266 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 970266 00:21:07.105 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 970266 00:21:07.670 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:07.671 08:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.671 rmmod nvme_tcp 00:21:07.671 rmmod nvme_fabrics 00:21:07.671 rmmod nvme_keyring 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 992922 ']' 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 992922 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 992922 ']' 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 992922 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.671 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 992922 00:21:07.671 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:07.671 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:07.671 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 992922' 00:21:07.671 killing process with pid 992922 00:21:07.671 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 992922 00:21:07.671 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 992922 00:21:07.928 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.928 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:07.928 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:07.928 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.928 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.929 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.929 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.929 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.829 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.7xQ /tmp/spdk.key-sha256.U3W /tmp/spdk.key-sha384.drj /tmp/spdk.key-sha512.2VG /tmp/spdk.key-sha512.eL4 /tmp/spdk.key-sha384.BT4 /tmp/spdk.key-sha256.Siy '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:10.089 00:21:10.089 real 3m11.040s 00:21:10.089 user 7m25.287s 00:21:10.089 sys 0m25.225s 00:21:10.089 08:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.089 ************************************ 00:21:10.089 END TEST nvmf_auth_target 00:21:10.089 ************************************ 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.089 ************************************ 00:21:10.089 START TEST nvmf_bdevio_no_huge 00:21:10.089 ************************************ 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:10.089 * Looking for test storage... 
00:21:10.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:10.089 
08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.089 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.991 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.991 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.991 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.991 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.991 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.991 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.991 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.992 08:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:11.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:11.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:11.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:11.992 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.992 08:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.992 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:21:12.251 00:21:12.251 --- 10.0.0.2 ping statistics --- 00:21:12.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.251 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:12.251 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:21:12.251 00:21:12.251 --- 10.0.0.1 ping statistics --- 00:21:12.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.251 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:12.251 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.251 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=995682 00:21:12.252 08:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 995682 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 995682 ']' 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.252 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:12.252 [2024-07-26 08:54:30.543216] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:12.252 [2024-07-26 08:54:30.543319] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:12.252 [2024-07-26 08:54:30.598419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:12.252 [2024-07-26 08:54:30.620442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.510 [2024-07-26 08:54:30.714511] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.510 [2024-07-26 08:54:30.714581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.510 [2024-07-26 08:54:30.714598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.510 [2024-07-26 08:54:30.714611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.510 [2024-07-26 08:54:30.714623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.510 [2024-07-26 08:54:30.715033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:12.510 [2024-07-26 08:54:30.715102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:12.510 [2024-07-26 08:54:30.715123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:12.510 [2024-07-26 08:54:30.715126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.510 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:12.510 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:12.510 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.510 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.510 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:12.510 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.511 08:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:12.511 [2024-07-26 08:54:30.843441] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:12.511 Malloc0 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:12.511 [2024-07-26 08:54:30.881244] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.511 { 00:21:12.511 "params": { 00:21:12.511 "name": "Nvme$subsystem", 00:21:12.511 "trtype": "$TEST_TRANSPORT", 00:21:12.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.511 "adrfam": "ipv4", 00:21:12.511 "trsvcid": "$NVMF_PORT", 00:21:12.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:21:12.511 "hdgst": ${hdgst:-false}, 00:21:12.511 "ddgst": ${ddgst:-false} 00:21:12.511 }, 00:21:12.511 "method": "bdev_nvme_attach_controller" 00:21:12.511 } 00:21:12.511 EOF 00:21:12.511 )") 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:12.511 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:12.511 "params": { 00:21:12.511 "name": "Nvme1", 00:21:12.511 "trtype": "tcp", 00:21:12.511 "traddr": "10.0.0.2", 00:21:12.511 "adrfam": "ipv4", 00:21:12.511 "trsvcid": "4420", 00:21:12.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.511 "hdgst": false, 00:21:12.511 "ddgst": false 00:21:12.511 }, 00:21:12.511 "method": "bdev_nvme_attach_controller" 00:21:12.511 }' 00:21:12.511 [2024-07-26 08:54:30.929035] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:12.511 [2024-07-26 08:54:30.929142] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid995707 ] 00:21:12.511 [2024-07-26 08:54:30.970534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:12.769 [2024-07-26 08:54:30.990448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:12.769 [2024-07-26 08:54:31.076241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.769 [2024-07-26 08:54:31.076287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.769 [2024-07-26 08:54:31.076291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.027 I/O targets: 00:21:13.027 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:13.027 00:21:13.027 00:21:13.027 CUnit - A unit testing framework for C - Version 2.1-3 00:21:13.027 http://cunit.sourceforge.net/ 00:21:13.027 00:21:13.027 00:21:13.027 Suite: bdevio tests on: Nvme1n1 00:21:13.027 Test: blockdev write read block ...passed 00:21:13.027 Test: blockdev write zeroes read block ...passed 00:21:13.027 Test: blockdev write zeroes read no split ...passed 00:21:13.027 Test: blockdev write zeroes read split ...passed 00:21:13.027 Test: blockdev write zeroes read split partial ...passed 00:21:13.027 Test: blockdev reset ...[2024-07-26 08:54:31.484544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:13.027 [2024-07-26 08:54:31.484649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197e330 (9): Bad file descriptor 00:21:13.285 [2024-07-26 08:54:31.586084] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:13.285 passed 00:21:13.285 Test: blockdev write read 8 blocks ...passed 00:21:13.285 Test: blockdev write read size > 128k ...passed 00:21:13.285 Test: blockdev write read invalid size ...passed 00:21:13.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:13.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:13.285 Test: blockdev write read max offset ...passed 00:21:13.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:13.572 Test: blockdev writev readv 8 blocks ...passed 00:21:13.572 Test: blockdev writev readv 30 x 1block ...passed 00:21:13.572 Test: blockdev writev readv block ...passed 00:21:13.572 Test: blockdev writev readv size > 128k ...passed 00:21:13.572 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:13.572 Test: blockdev comparev and writev ...[2024-07-26 08:54:31.844917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.572 [2024-07-26 08:54:31.844952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.572 [2024-07-26 08:54:31.844976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.572 [2024-07-26 08:54:31.844992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:13.572 [2024-07-26 08:54:31.845358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.572 [2024-07-26 08:54:31.845383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:13.572 [2024-07-26 08:54:31.845406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.572 [2024-07-26 08:54:31.845421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:13.572 [2024-07-26 08:54:31.845782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.572 [2024-07-26 08:54:31.845805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:13.572 [2024-07-26 08:54:31.845826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.572 [2024-07-26 08:54:31.845842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:13.572 [2024-07-26 08:54:31.846216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.572 [2024-07-26 08:54:31.846240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:13.572 [2024-07-26 08:54:31.846261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.573 [2024-07-26 08:54:31.846277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:13.573 passed 00:21:13.573 Test: blockdev nvme passthru rw ...passed 00:21:13.573 Test: blockdev nvme passthru vendor specific ...[2024-07-26 08:54:31.930377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.573 [2024-07-26 08:54:31.930404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:13.573 [2024-07-26 08:54:31.930574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.573 [2024-07-26 08:54:31.930596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:13.573 [2024-07-26 08:54:31.930761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.573 [2024-07-26 08:54:31.930783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:13.573 [2024-07-26 08:54:31.930949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.573 [2024-07-26 08:54:31.930972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:13.573 passed 00:21:13.573 Test: blockdev nvme admin passthru ...passed 00:21:13.573 Test: blockdev copy ...passed 00:21:13.573 00:21:13.573 Run Summary: Type Total Ran Passed Failed Inactive 00:21:13.573 suites 1 1 n/a 0 0 00:21:13.573 tests 23 23 23 0 0 00:21:13.573 asserts 152 152 152 0 n/a 00:21:13.573 00:21:13.573 Elapsed time = 1.417 seconds 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.140 rmmod nvme_tcp 00:21:14.140 rmmod nvme_fabrics 00:21:14.140 rmmod nvme_keyring 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 995682 ']' 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 995682 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 995682 ']' 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 995682 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:14.140 08:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 995682 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 995682' 00:21:14.140 killing process with pid 995682 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 995682 00:21:14.140 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 995682 00:21:14.400 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:14.400 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:14.400 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:14.400 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.400 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:14.400 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.400 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.400 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.936 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:16.936 00:21:16.936 real 0m6.491s 00:21:16.936 user 0m10.978s 
00:21:16.936 sys 0m2.513s 00:21:16.936 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:16.936 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:16.936 ************************************ 00:21:16.936 END TEST nvmf_bdevio_no_huge 00:21:16.936 ************************************ 00:21:16.936 08:54:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:16.936 08:54:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:16.936 08:54:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:16.936 08:54:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.936 ************************************ 00:21:16.936 START TEST nvmf_tls 00:21:16.936 ************************************ 00:21:16.936 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:16.936 * Looking for test storage... 
00:21:16.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.937 
08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:16.937 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:18.842 08:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.842 08:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:18.842 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:18.842 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.842 08:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:18.842 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:18.842 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:18.842 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:18.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:18.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:21:18.843 00:21:18.843 --- 10.0.0.2 ping statistics --- 00:21:18.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.843 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:21:18.843 00:21:18.843 --- 10.0.0.1 ping statistics --- 00:21:18.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.843 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=997903 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 997903 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 997903 ']' 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.843 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.843 [2024-07-26 08:54:37.100244] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:21:18.843 [2024-07-26 08:54:37.100315] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.843 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.843 [2024-07-26 08:54:37.139245] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:18.843 [2024-07-26 08:54:37.171563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.843 [2024-07-26 08:54:37.263406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.843 [2024-07-26 08:54:37.263487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.843 [2024-07-26 08:54:37.263504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.843 [2024-07-26 08:54:37.263518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.843 [2024-07-26 08:54:37.263529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.843 [2024-07-26 08:54:37.263559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.101 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.101 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:19.101 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:19.101 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:19.101 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.101 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.101 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:19.101 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:19.358 true 00:21:19.358 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.358 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:19.616 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:19.616 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:19.616 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:19.875 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.875 08:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:20.133 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:20.133 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:20.133 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:20.391 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:20.391 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:20.649 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:20.649 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:20.649 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:20.649 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:20.907 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:20.907 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:20.907 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:21.166 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:21.166 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:21.424 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:21.424 
08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:21.424 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:21.682 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:21.682 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.jCUdoao4BL 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.iJIYgQnJ78 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.jCUdoao4BL 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iJIYgQnJ78 00:21:21.941 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:22.199 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:22.457 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.jCUdoao4BL 00:21:22.457 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jCUdoao4BL 00:21:22.457 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:22.715 [2024-07-26 08:54:41.159378] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.715 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:22.973 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:23.231 [2024-07-26 08:54:41.644668] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.231 [2024-07-26 08:54:41.644949] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.231 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:23.489 malloc0 00:21:23.489 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:24.056 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jCUdoao4BL 00:21:24.056 
[2024-07-26 08:54:42.470691] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:24.056 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.jCUdoao4BL 00:21:24.314 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.281 Initializing NVMe Controllers 00:21:34.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:34.281 Initialization complete. Launching workers. 00:21:34.281 ======================================================== 00:21:34.281 Latency(us) 00:21:34.281 Device Information : IOPS MiB/s Average min max 00:21:34.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7890.98 30.82 8112.62 1230.44 9381.37 00:21:34.281 ======================================================== 00:21:34.281 Total : 7890.98 30.82 8112.62 1230.44 9381.37 00:21:34.281 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jCUdoao4BL 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jCUdoao4BL' 00:21:34.281 08:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=999679 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 999679 /var/tmp/bdevperf.sock 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 999679 ']' 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.281 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.281 [2024-07-26 08:54:52.668118] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:21:34.281 [2024-07-26 08:54:52.668190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999679 ] 00:21:34.281 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.281 [2024-07-26 08:54:52.699193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:34.281 [2024-07-26 08:54:52.726597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.540 [2024-07-26 08:54:52.812965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.540 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:34.540 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:34.540 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jCUdoao4BL 00:21:34.798 [2024-07-26 08:54:53.142657] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.798 [2024-07-26 08:54:53.142774] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:34.798 TLSTESTn1 00:21:34.798 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:35.056 Running I/O for 10 seconds... 
00:21:45.058 00:21:45.058 Latency(us) 00:21:45.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.058 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:45.058 Verification LBA range: start 0x0 length 0x2000 00:21:45.058 TLSTESTn1 : 10.03 3415.34 13.34 0.00 0.00 37388.83 6043.88 60972.75 00:21:45.058 =================================================================================================================== 00:21:45.058 Total : 3415.34 13.34 0.00 0.00 37388.83 6043.88 60972.75 00:21:45.058 0 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 999679 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 999679 ']' 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 999679 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 999679 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 999679' 00:21:45.058 killing process with pid 999679 00:21:45.058 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 999679 00:21:45.058 Received shutdown signal, test time was about 10.000000 seconds 00:21:45.058 
00:21:45.058 Latency(us) 00:21:45.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.058 =================================================================================================================== 00:21:45.058 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.058 [2024-07-26 08:55:03.437237] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:45.059 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 999679 00:21:45.325 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iJIYgQnJ78 00:21:45.325 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:45.325 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iJIYgQnJ78 00:21:45.325 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iJIYgQnJ78 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:45.326 08:55:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iJIYgQnJ78' 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1000993 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1000993 /var/tmp/bdevperf.sock 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1000993 ']' 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.326 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.326 [2024-07-26 08:55:03.711280] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:21:45.326 [2024-07-26 08:55:03.711372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000993 ] 00:21:45.326 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.326 [2024-07-26 08:55:03.742727] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:45.326 [2024-07-26 08:55:03.769066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.584 [2024-07-26 08:55:03.851902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.584 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.584 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:45.584 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iJIYgQnJ78 00:21:45.842 [2024-07-26 08:55:04.230233] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.842 [2024-07-26 08:55:04.230381] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:45.842 [2024-07-26 08:55:04.235731] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:45.842 [2024-07-26 08:55:04.236219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x23738d0 (107): Transport endpoint is not connected 00:21:45.842 [2024-07-26 08:55:04.237207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23738d0 (9): Bad file descriptor 00:21:45.842 [2024-07-26 08:55:04.238206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:45.842 [2024-07-26 08:55:04.238229] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:45.842 [2024-07-26 08:55:04.238248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:45.842 request: 00:21:45.842 { 00:21:45.842 "name": "TLSTEST", 00:21:45.842 "trtype": "tcp", 00:21:45.842 "traddr": "10.0.0.2", 00:21:45.842 "adrfam": "ipv4", 00:21:45.842 "trsvcid": "4420", 00:21:45.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.842 "prchk_reftag": false, 00:21:45.842 "prchk_guard": false, 00:21:45.842 "hdgst": false, 00:21:45.842 "ddgst": false, 00:21:45.842 "psk": "/tmp/tmp.iJIYgQnJ78", 00:21:45.842 "method": "bdev_nvme_attach_controller", 00:21:45.842 "req_id": 1 00:21:45.842 } 00:21:45.842 Got JSON-RPC error response 00:21:45.842 response: 00:21:45.842 { 00:21:45.842 "code": -5, 00:21:45.842 "message": "Input/output error" 00:21:45.842 } 00:21:45.842 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1000993 00:21:45.842 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1000993 ']' 00:21:45.842 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1000993 00:21:45.842 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:45.842 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:45.842 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 1000993 00:21:45.843 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:45.843 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:45.843 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1000993' 00:21:45.843 killing process with pid 1000993 00:21:45.843 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1000993 00:21:45.843 Received shutdown signal, test time was about 10.000000 seconds 00:21:45.843 00:21:45.843 Latency(us) 00:21:45.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.843 =================================================================================================================== 00:21:45.843 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:45.843 [2024-07-26 08:55:04.287598] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:45.843 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1000993 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jCUdoao4BL 00:21:46.101 08:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jCUdoao4BL 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jCUdoao4BL 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jCUdoao4BL' 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1001130 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.101 
08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1001130 /var/tmp/bdevperf.sock 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1001130 ']' 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.101 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.101 [2024-07-26 08:55:04.535277] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:46.101 [2024-07-26 08:55:04.535373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001130 ] 00:21:46.360 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.360 [2024-07-26 08:55:04.569878] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:46.360 [2024-07-26 08:55:04.597405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.360 [2024-07-26 08:55:04.688001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.360 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.360 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:46.360 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.jCUdoao4BL 00:21:46.618 [2024-07-26 08:55:05.075460] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.618 [2024-07-26 08:55:05.075607] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:46.876 [2024-07-26 08:55:05.085687] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:46.876 [2024-07-26 08:55:05.085722] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:46.876 [2024-07-26 08:55:05.085778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:46.876 [2024-07-26 08:55:05.086563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ce8d0 (107): Transport endpoint is not connected 00:21:46.876 [2024-07-26 08:55:05.087551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x15ce8d0 (9): Bad file descriptor 00:21:46.876 [2024-07-26 08:55:05.088552] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.876 [2024-07-26 08:55:05.088577] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:46.876 [2024-07-26 08:55:05.088611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.876 request: 00:21:46.876 { 00:21:46.876 "name": "TLSTEST", 00:21:46.876 "trtype": "tcp", 00:21:46.876 "traddr": "10.0.0.2", 00:21:46.876 "adrfam": "ipv4", 00:21:46.876 "trsvcid": "4420", 00:21:46.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.876 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:46.876 "prchk_reftag": false, 00:21:46.876 "prchk_guard": false, 00:21:46.876 "hdgst": false, 00:21:46.876 "ddgst": false, 00:21:46.876 "psk": "/tmp/tmp.jCUdoao4BL", 00:21:46.876 "method": "bdev_nvme_attach_controller", 00:21:46.876 "req_id": 1 00:21:46.876 } 00:21:46.876 Got JSON-RPC error response 00:21:46.876 response: 00:21:46.876 { 00:21:46.876 "code": -5, 00:21:46.876 "message": "Input/output error" 00:21:46.877 } 00:21:46.877 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1001130 00:21:46.877 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1001130 ']' 00:21:46.877 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1001130 00:21:46.877 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:46.877 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:46.877 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1001130 00:21:46.877 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:46.877 08:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:46.877 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1001130' 00:21:46.877 killing process with pid 1001130 00:21:46.877 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1001130 00:21:46.877 Received shutdown signal, test time was about 10.000000 seconds 00:21:46.877 00:21:46.877 Latency(us) 00:21:46.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.877 =================================================================================================================== 00:21:46.877 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:46.877 [2024-07-26 08:55:05.140409] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:46.877 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1001130 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jCUdoao4BL 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jCUdoao4BL 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:47.135 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jCUdoao4BL 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jCUdoao4BL' 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1001145 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1001145 /var/tmp/bdevperf.sock 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1001145 ']' 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.136 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.136 [2024-07-26 08:55:05.408794] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:47.136 [2024-07-26 08:55:05.408866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001145 ] 00:21:47.136 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.136 [2024-07-26 08:55:05.444480] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:47.136 [2024-07-26 08:55:05.475009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.136 [2024-07-26 08:55:05.561979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.394 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.394 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:47.394 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jCUdoao4BL 00:21:47.653 [2024-07-26 08:55:05.887718] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.653 [2024-07-26 08:55:05.887841] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:47.653 [2024-07-26 08:55:05.897765] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:47.653 [2024-07-26 08:55:05.897799] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:47.653 [2024-07-26 08:55:05.897854] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:47.653 [2024-07-26 08:55:05.898732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4f8d0 (107): Transport endpoint is not connected 00:21:47.653 [2024-07-26 08:55:05.899722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xa4f8d0 (9): Bad file descriptor 00:21:47.653 [2024-07-26 08:55:05.900722] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:47.653 [2024-07-26 08:55:05.900742] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:47.653 [2024-07-26 08:55:05.900775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:47.653 request: 00:21:47.653 { 00:21:47.653 "name": "TLSTEST", 00:21:47.653 "trtype": "tcp", 00:21:47.653 "traddr": "10.0.0.2", 00:21:47.653 "adrfam": "ipv4", 00:21:47.653 "trsvcid": "4420", 00:21:47.653 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:47.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.653 "prchk_reftag": false, 00:21:47.653 "prchk_guard": false, 00:21:47.653 "hdgst": false, 00:21:47.653 "ddgst": false, 00:21:47.653 "psk": "/tmp/tmp.jCUdoao4BL", 00:21:47.653 "method": "bdev_nvme_attach_controller", 00:21:47.653 "req_id": 1 00:21:47.653 } 00:21:47.653 Got JSON-RPC error response 00:21:47.653 response: 00:21:47.653 { 00:21:47.653 "code": -5, 00:21:47.653 "message": "Input/output error" 00:21:47.653 } 00:21:47.653 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1001145 00:21:47.653 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1001145 ']' 00:21:47.653 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1001145 00:21:47.653 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:47.653 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:47.653 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1001145 00:21:47.653 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:47.653 08:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:47.653 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1001145' 00:21:47.653 killing process with pid 1001145 00:21:47.653 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1001145 00:21:47.653 Received shutdown signal, test time was about 10.000000 seconds 00:21:47.653 00:21:47.653 Latency(us) 00:21:47.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.653 =================================================================================================================== 00:21:47.653 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:47.653 [2024-07-26 08:55:05.950345] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:47.653 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1001145 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1001281 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.912 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:47.913 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1001281 /var/tmp/bdevperf.sock 00:21:47.913 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1001281 ']' 00:21:47.913 08:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.913 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.913 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.913 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.913 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.913 [2024-07-26 08:55:06.213385] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:47.913 [2024-07-26 08:55:06.213470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001281 ] 00:21:47.913 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.913 [2024-07-26 08:55:06.255933] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:47.913 [2024-07-26 08:55:06.284568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.171 [2024-07-26 08:55:06.376244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.171 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.171 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:48.171 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:48.429 [2024-07-26 08:55:06.749985] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:48.429 [2024-07-26 08:55:06.751952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb31de0 (9): Bad file descriptor 00:21:48.429 [2024-07-26 08:55:06.752949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.429 [2024-07-26 08:55:06.752969] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:48.429 [2024-07-26 08:55:06.753000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:48.429 request: 00:21:48.429 { 00:21:48.429 "name": "TLSTEST", 00:21:48.429 "trtype": "tcp", 00:21:48.429 "traddr": "10.0.0.2", 00:21:48.429 "adrfam": "ipv4", 00:21:48.429 "trsvcid": "4420", 00:21:48.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.429 "prchk_reftag": false, 00:21:48.429 "prchk_guard": false, 00:21:48.429 "hdgst": false, 00:21:48.429 "ddgst": false, 00:21:48.429 "method": "bdev_nvme_attach_controller", 00:21:48.429 "req_id": 1 00:21:48.429 } 00:21:48.429 Got JSON-RPC error response 00:21:48.429 response: 00:21:48.429 { 00:21:48.429 "code": -5, 00:21:48.429 "message": "Input/output error" 00:21:48.429 } 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1001281 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1001281 ']' 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1001281 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1001281 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1001281' 00:21:48.429 killing process with pid 1001281 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1001281 00:21:48.429 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.429 
00:21:48.429 Latency(us) 00:21:48.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.429 =================================================================================================================== 00:21:48.429 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:48.429 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1001281 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 997903 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 997903 ']' 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 997903 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:48.687 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 997903 00:21:48.687 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:48.687 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:48.687 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 997903' 
00:21:48.687 killing process with pid 997903 00:21:48.687 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 997903 00:21:48.687 [2024-07-26 08:55:07.009965] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:48.687 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 997903 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.5AOGPiS0a3 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.5AOGPiS0a3 00:21:48.945 
08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1001434 00:21:48.945 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:48.946 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1001434 00:21:48.946 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1001434 ']' 00:21:48.946 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.946 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.946 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.946 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.946 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.946 [2024-07-26 08:55:07.330844] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:21:48.946 [2024-07-26 08:55:07.330928] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.946 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.946 [2024-07-26 08:55:07.367139] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:48.946 [2024-07-26 08:55:07.399655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.204 [2024-07-26 08:55:07.486813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.204 [2024-07-26 08:55:07.486880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.204 [2024-07-26 08:55:07.486896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.204 [2024-07-26 08:55:07.486910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.204 [2024-07-26 08:55:07.486923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.204 [2024-07-26 08:55:07.486953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.204 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.204 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:49.204 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.204 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.204 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.204 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.204 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.5AOGPiS0a3 00:21:49.204 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5AOGPiS0a3 00:21:49.204 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:49.462 [2024-07-26 08:55:07.863736] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.462 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:49.720 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:49.978 [2024-07-26 08:55:08.361095] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.978 [2024-07-26 08:55:08.361341] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:49.978 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:50.236 malloc0 00:21:50.236 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:50.494 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5AOGPiS0a3 00:21:50.752 [2024-07-26 08:55:09.142295] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5AOGPiS0a3 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5AOGPiS0a3' 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1001717 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.752 08:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1001717 /var/tmp/bdevperf.sock 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1001717 ']' 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.752 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.753 [2024-07-26 08:55:09.204206] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:50.753 [2024-07-26 08:55:09.204284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001717 ] 00:21:51.011 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.011 [2024-07-26 08:55:09.235347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:51.011 [2024-07-26 08:55:09.261957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.011 [2024-07-26 08:55:09.344908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.011 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.011 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:51.011 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5AOGPiS0a3 00:21:51.269 [2024-07-26 08:55:09.674826] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.269 [2024-07-26 08:55:09.674944] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:51.529 TLSTESTn1 00:21:51.529 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:51.529 Running I/O for 10 seconds... 
00:22:01.490 00:22:01.490 Latency(us) 00:22:01.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.490 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:01.490 Verification LBA range: start 0x0 length 0x2000 00:22:01.490 TLSTESTn1 : 10.04 3025.58 11.82 0.00 0.00 42205.65 5655.51 74565.40 00:22:01.490 =================================================================================================================== 00:22:01.490 Total : 3025.58 11.82 0.00 0.00 42205.65 5655.51 74565.40 00:22:01.490 0 00:22:01.490 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.490 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1001717 00:22:01.490 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1001717 ']' 00:22:01.490 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1001717 00:22:01.490 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:01.490 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.748 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1001717 00:22:01.748 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:01.748 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:01.748 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1001717' 00:22:01.748 killing process with pid 1001717 00:22:01.748 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1001717 00:22:01.748 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.748 
00:22:01.748 Latency(us) 00:22:01.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.748 =================================================================================================================== 00:22:01.748 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.748 [2024-07-26 08:55:19.978191] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:01.748 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1001717 00:22:01.748 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.5AOGPiS0a3 00:22:01.748 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5AOGPiS0a3 00:22:01.748 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:01.748 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5AOGPiS0a3 00:22:01.748 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5AOGPiS0a3 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.008 08:55:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5AOGPiS0a3' 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1002913 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1002913 /var/tmp/bdevperf.sock 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1002913 ']' 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.008 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.008 [2024-07-26 08:55:20.257456] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:02.008 [2024-07-26 08:55:20.257529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002913 ] 00:22:02.008 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.008 [2024-07-26 08:55:20.290154] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:02.008 [2024-07-26 08:55:20.318078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.008 [2024-07-26 08:55:20.402319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.267 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.267 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:02.267 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5AOGPiS0a3 00:22:02.525 [2024-07-26 08:55:20.745880] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.525 [2024-07-26 08:55:20.745975] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:02.525 [2024-07-26 08:55:20.745991] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: 
Could not load PSK from /tmp/tmp.5AOGPiS0a3 00:22:02.525 request: 00:22:02.525 { 00:22:02.525 "name": "TLSTEST", 00:22:02.525 "trtype": "tcp", 00:22:02.525 "traddr": "10.0.0.2", 00:22:02.525 "adrfam": "ipv4", 00:22:02.525 "trsvcid": "4420", 00:22:02.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.525 "prchk_reftag": false, 00:22:02.525 "prchk_guard": false, 00:22:02.525 "hdgst": false, 00:22:02.525 "ddgst": false, 00:22:02.525 "psk": "/tmp/tmp.5AOGPiS0a3", 00:22:02.525 "method": "bdev_nvme_attach_controller", 00:22:02.525 "req_id": 1 00:22:02.525 } 00:22:02.525 Got JSON-RPC error response 00:22:02.525 response: 00:22:02.525 { 00:22:02.525 "code": -1, 00:22:02.525 "message": "Operation not permitted" 00:22:02.525 } 00:22:02.525 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1002913 00:22:02.525 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1002913 ']' 00:22:02.525 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1002913 00:22:02.525 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:02.525 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.525 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1002913 00:22:02.525 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:02.525 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:02.525 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1002913' 00:22:02.525 killing process with pid 1002913 00:22:02.525 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1002913 
00:22:02.525 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.525 00:22:02.525 Latency(us) 00:22:02.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.526 =================================================================================================================== 00:22:02.526 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.526 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1002913 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1001434 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1001434 ']' 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1001434 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1001434 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:02.784 08:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1001434' 00:22:02.784 killing process with pid 1001434 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1001434 00:22:02.784 [2024-07-26 08:55:21.044190] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:02.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1001434 00:22:03.042 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1003057 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1003057 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1003057 ']' 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:03.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.043 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.043 [2024-07-26 08:55:21.347828] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:03.043 [2024-07-26 08:55:21.347907] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.043 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.043 [2024-07-26 08:55:21.385865] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:03.043 [2024-07-26 08:55:21.418647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.303 [2024-07-26 08:55:21.512373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.303 [2024-07-26 08:55:21.512429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.303 [2024-07-26 08:55:21.512445] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.303 [2024-07-26 08:55:21.512459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.303 [2024-07-26 08:55:21.512471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:03.303 [2024-07-26 08:55:21.512502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.5AOGPiS0a3 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5AOGPiS0a3 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.5AOGPiS0a3 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5AOGPiS0a3 00:22:03.303 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:03.596 [2024-07-26 08:55:21.893108] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.596 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:03.854 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:04.112 [2024-07-26 08:55:22.478628] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.112 [2024-07-26 08:55:22.478884] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.112 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:04.371 malloc0 00:22:04.371 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:04.629 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5AOGPiS0a3 00:22:04.888 [2024-07-26 08:55:23.219133] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:04.888 [2024-07-26 08:55:23.219177] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:04.888 [2024-07-26 08:55:23.219216] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:04.888 request: 00:22:04.888 { 
00:22:04.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.888 "host": "nqn.2016-06.io.spdk:host1", 00:22:04.888 "psk": "/tmp/tmp.5AOGPiS0a3", 00:22:04.888 "method": "nvmf_subsystem_add_host", 00:22:04.888 "req_id": 1 00:22:04.888 } 00:22:04.888 Got JSON-RPC error response 00:22:04.888 response: 00:22:04.888 { 00:22:04.888 "code": -32603, 00:22:04.888 "message": "Internal error" 00:22:04.888 } 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1003057 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1003057 ']' 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1003057 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1003057 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1003057' 00:22:04.888 killing process with pid 1003057 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1003057 00:22:04.888 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1003057 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.5AOGPiS0a3 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1003352 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1003352 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1003352 ']' 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.146 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.146 [2024-07-26 08:55:23.580723] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:05.146 [2024-07-26 08:55:23.580807] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.404 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.404 [2024-07-26 08:55:23.618382] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:05.404 [2024-07-26 08:55:23.650712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.404 [2024-07-26 08:55:23.745929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.404 [2024-07-26 08:55:23.745991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.404 [2024-07-26 08:55:23.746008] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.404 [2024-07-26 08:55:23.746021] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.404 [2024-07-26 08:55:23.746033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:05.404 [2024-07-26 08:55:23.746071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.404 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.404 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:05.404 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.404 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.404 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.663 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.663 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.5AOGPiS0a3 00:22:05.663 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5AOGPiS0a3 00:22:05.663 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:05.922 [2024-07-26 08:55:24.157879] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.922 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:06.180 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:06.438 [2024-07-26 08:55:24.747471] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:06.438 [2024-07-26 08:55:24.747738] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:06.438 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:06.696 malloc0 00:22:06.696 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:06.955 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5AOGPiS0a3 00:22:07.213 [2024-07-26 08:55:25.565438] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:07.213 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1003637 00:22:07.213 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:07.213 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:07.213 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1003637 /var/tmp/bdevperf.sock 00:22:07.213 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1003637 ']' 00:22:07.213 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.213 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.213 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:07.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.213 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.213 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.213 [2024-07-26 08:55:25.622441] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:07.213 [2024-07-26 08:55:25.622528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003637 ] 00:22:07.213 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.213 [2024-07-26 08:55:25.654438] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:07.472 [2024-07-26 08:55:25.680960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.472 [2024-07-26 08:55:25.766216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.472 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.472 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:07.472 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5AOGPiS0a3 00:22:07.730 [2024-07-26 08:55:26.134584] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.730 [2024-07-26 08:55:26.134698] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:07.987 TLSTESTn1 00:22:07.987 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:08.245 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:08.245 "subsystems": [ 00:22:08.246 { 00:22:08.246 "subsystem": "keyring", 00:22:08.246 "config": [] 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "subsystem": "iobuf", 00:22:08.246 "config": [ 00:22:08.246 { 00:22:08.246 "method": "iobuf_set_options", 00:22:08.246 "params": { 00:22:08.246 "small_pool_count": 8192, 00:22:08.246 "large_pool_count": 1024, 00:22:08.246 "small_bufsize": 8192, 00:22:08.246 "large_bufsize": 135168 00:22:08.246 } 00:22:08.246 } 00:22:08.246 ] 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "subsystem": "sock", 00:22:08.246 "config": [ 00:22:08.246 { 00:22:08.246 "method": 
"sock_set_default_impl", 00:22:08.246 "params": { 00:22:08.246 "impl_name": "posix" 00:22:08.246 } 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "sock_impl_set_options", 00:22:08.246 "params": { 00:22:08.246 "impl_name": "ssl", 00:22:08.246 "recv_buf_size": 4096, 00:22:08.246 "send_buf_size": 4096, 00:22:08.246 "enable_recv_pipe": true, 00:22:08.246 "enable_quickack": false, 00:22:08.246 "enable_placement_id": 0, 00:22:08.246 "enable_zerocopy_send_server": true, 00:22:08.246 "enable_zerocopy_send_client": false, 00:22:08.246 "zerocopy_threshold": 0, 00:22:08.246 "tls_version": 0, 00:22:08.246 "enable_ktls": false 00:22:08.246 } 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "sock_impl_set_options", 00:22:08.246 "params": { 00:22:08.246 "impl_name": "posix", 00:22:08.246 "recv_buf_size": 2097152, 00:22:08.246 "send_buf_size": 2097152, 00:22:08.246 "enable_recv_pipe": true, 00:22:08.246 "enable_quickack": false, 00:22:08.246 "enable_placement_id": 0, 00:22:08.246 "enable_zerocopy_send_server": true, 00:22:08.246 "enable_zerocopy_send_client": false, 00:22:08.246 "zerocopy_threshold": 0, 00:22:08.246 "tls_version": 0, 00:22:08.246 "enable_ktls": false 00:22:08.246 } 00:22:08.246 } 00:22:08.246 ] 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "subsystem": "vmd", 00:22:08.246 "config": [] 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "subsystem": "accel", 00:22:08.246 "config": [ 00:22:08.246 { 00:22:08.246 "method": "accel_set_options", 00:22:08.246 "params": { 00:22:08.246 "small_cache_size": 128, 00:22:08.246 "large_cache_size": 16, 00:22:08.246 "task_count": 2048, 00:22:08.246 "sequence_count": 2048, 00:22:08.246 "buf_count": 2048 00:22:08.246 } 00:22:08.246 } 00:22:08.246 ] 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "subsystem": "bdev", 00:22:08.246 "config": [ 00:22:08.246 { 00:22:08.246 "method": "bdev_set_options", 00:22:08.246 "params": { 00:22:08.246 "bdev_io_pool_size": 65535, 00:22:08.246 "bdev_io_cache_size": 256, 00:22:08.246 
"bdev_auto_examine": true, 00:22:08.246 "iobuf_small_cache_size": 128, 00:22:08.246 "iobuf_large_cache_size": 16 00:22:08.246 } 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "bdev_raid_set_options", 00:22:08.246 "params": { 00:22:08.246 "process_window_size_kb": 1024, 00:22:08.246 "process_max_bandwidth_mb_sec": 0 00:22:08.246 } 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "bdev_iscsi_set_options", 00:22:08.246 "params": { 00:22:08.246 "timeout_sec": 30 00:22:08.246 } 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "bdev_nvme_set_options", 00:22:08.246 "params": { 00:22:08.246 "action_on_timeout": "none", 00:22:08.246 "timeout_us": 0, 00:22:08.246 "timeout_admin_us": 0, 00:22:08.246 "keep_alive_timeout_ms": 10000, 00:22:08.246 "arbitration_burst": 0, 00:22:08.246 "low_priority_weight": 0, 00:22:08.246 "medium_priority_weight": 0, 00:22:08.246 "high_priority_weight": 0, 00:22:08.246 "nvme_adminq_poll_period_us": 10000, 00:22:08.246 "nvme_ioq_poll_period_us": 0, 00:22:08.246 "io_queue_requests": 0, 00:22:08.246 "delay_cmd_submit": true, 00:22:08.246 "transport_retry_count": 4, 00:22:08.246 "bdev_retry_count": 3, 00:22:08.246 "transport_ack_timeout": 0, 00:22:08.246 "ctrlr_loss_timeout_sec": 0, 00:22:08.246 "reconnect_delay_sec": 0, 00:22:08.246 "fast_io_fail_timeout_sec": 0, 00:22:08.246 "disable_auto_failback": false, 00:22:08.246 "generate_uuids": false, 00:22:08.246 "transport_tos": 0, 00:22:08.246 "nvme_error_stat": false, 00:22:08.246 "rdma_srq_size": 0, 00:22:08.246 "io_path_stat": false, 00:22:08.246 "allow_accel_sequence": false, 00:22:08.246 "rdma_max_cq_size": 0, 00:22:08.246 "rdma_cm_event_timeout_ms": 0, 00:22:08.246 "dhchap_digests": [ 00:22:08.246 "sha256", 00:22:08.246 "sha384", 00:22:08.246 "sha512" 00:22:08.246 ], 00:22:08.246 "dhchap_dhgroups": [ 00:22:08.246 "null", 00:22:08.246 "ffdhe2048", 00:22:08.246 "ffdhe3072", 00:22:08.246 "ffdhe4096", 00:22:08.246 "ffdhe6144", 00:22:08.246 "ffdhe8192" 00:22:08.246 ] 00:22:08.246 } 
00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "bdev_nvme_set_hotplug", 00:22:08.246 "params": { 00:22:08.246 "period_us": 100000, 00:22:08.246 "enable": false 00:22:08.246 } 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "bdev_malloc_create", 00:22:08.246 "params": { 00:22:08.246 "name": "malloc0", 00:22:08.246 "num_blocks": 8192, 00:22:08.246 "block_size": 4096, 00:22:08.246 "physical_block_size": 4096, 00:22:08.246 "uuid": "d6560298-7675-45b2-b20a-fba7d3a1882a", 00:22:08.246 "optimal_io_boundary": 0, 00:22:08.246 "md_size": 0, 00:22:08.246 "dif_type": 0, 00:22:08.246 "dif_is_head_of_md": false, 00:22:08.246 "dif_pi_format": 0 00:22:08.246 } 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "bdev_wait_for_examine" 00:22:08.246 } 00:22:08.246 ] 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "subsystem": "nbd", 00:22:08.246 "config": [] 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "subsystem": "scheduler", 00:22:08.246 "config": [ 00:22:08.246 { 00:22:08.246 "method": "framework_set_scheduler", 00:22:08.246 "params": { 00:22:08.246 "name": "static" 00:22:08.246 } 00:22:08.246 } 00:22:08.246 ] 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "subsystem": "nvmf", 00:22:08.246 "config": [ 00:22:08.246 { 00:22:08.246 "method": "nvmf_set_config", 00:22:08.246 "params": { 00:22:08.246 "discovery_filter": "match_any", 00:22:08.246 "admin_cmd_passthru": { 00:22:08.246 "identify_ctrlr": false 00:22:08.246 } 00:22:08.246 } 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "nvmf_set_max_subsystems", 00:22:08.246 "params": { 00:22:08.246 "max_subsystems": 1024 00:22:08.246 } 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "nvmf_set_crdt", 00:22:08.246 "params": { 00:22:08.246 "crdt1": 0, 00:22:08.246 "crdt2": 0, 00:22:08.246 "crdt3": 0 00:22:08.246 } 00:22:08.246 }, 00:22:08.246 { 00:22:08.246 "method": "nvmf_create_transport", 00:22:08.246 "params": { 00:22:08.246 "trtype": "TCP", 00:22:08.246 "max_queue_depth": 128, 00:22:08.246 "max_io_qpairs_per_ctrlr": 
127, 00:22:08.246 "in_capsule_data_size": 4096, 00:22:08.246 "max_io_size": 131072, 00:22:08.246 "io_unit_size": 131072, 00:22:08.246 "max_aq_depth": 128, 00:22:08.246 "num_shared_buffers": 511, 00:22:08.246 "buf_cache_size": 4294967295, 00:22:08.246 "dif_insert_or_strip": false, 00:22:08.246 "zcopy": false, 00:22:08.246 "c2h_success": false, 00:22:08.246 "sock_priority": 0, 00:22:08.247 "abort_timeout_sec": 1, 00:22:08.247 "ack_timeout": 0, 00:22:08.247 "data_wr_pool_size": 0 00:22:08.247 } 00:22:08.247 }, 00:22:08.247 { 00:22:08.247 "method": "nvmf_create_subsystem", 00:22:08.247 "params": { 00:22:08.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.247 "allow_any_host": false, 00:22:08.247 "serial_number": "SPDK00000000000001", 00:22:08.247 "model_number": "SPDK bdev Controller", 00:22:08.247 "max_namespaces": 10, 00:22:08.247 "min_cntlid": 1, 00:22:08.247 "max_cntlid": 65519, 00:22:08.247 "ana_reporting": false 00:22:08.247 } 00:22:08.247 }, 00:22:08.247 { 00:22:08.247 "method": "nvmf_subsystem_add_host", 00:22:08.247 "params": { 00:22:08.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.247 "host": "nqn.2016-06.io.spdk:host1", 00:22:08.247 "psk": "/tmp/tmp.5AOGPiS0a3" 00:22:08.247 } 00:22:08.247 }, 00:22:08.247 { 00:22:08.247 "method": "nvmf_subsystem_add_ns", 00:22:08.247 "params": { 00:22:08.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.247 "namespace": { 00:22:08.247 "nsid": 1, 00:22:08.247 "bdev_name": "malloc0", 00:22:08.247 "nguid": "D6560298767545B2B20AFBA7D3A1882A", 00:22:08.247 "uuid": "d6560298-7675-45b2-b20a-fba7d3a1882a", 00:22:08.247 "no_auto_visible": false 00:22:08.247 } 00:22:08.247 } 00:22:08.247 }, 00:22:08.247 { 00:22:08.247 "method": "nvmf_subsystem_add_listener", 00:22:08.247 "params": { 00:22:08.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.247 "listen_address": { 00:22:08.247 "trtype": "TCP", 00:22:08.247 "adrfam": "IPv4", 00:22:08.247 "traddr": "10.0.0.2", 00:22:08.247 "trsvcid": "4420" 00:22:08.247 }, 00:22:08.247 
"secure_channel": true 00:22:08.247 } 00:22:08.247 } 00:22:08.247 ] 00:22:08.247 } 00:22:08.247 ] 00:22:08.247 }' 00:22:08.247 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:08.505 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:08.505 "subsystems": [ 00:22:08.505 { 00:22:08.505 "subsystem": "keyring", 00:22:08.505 "config": [] 00:22:08.505 }, 00:22:08.505 { 00:22:08.505 "subsystem": "iobuf", 00:22:08.505 "config": [ 00:22:08.505 { 00:22:08.505 "method": "iobuf_set_options", 00:22:08.505 "params": { 00:22:08.505 "small_pool_count": 8192, 00:22:08.505 "large_pool_count": 1024, 00:22:08.505 "small_bufsize": 8192, 00:22:08.505 "large_bufsize": 135168 00:22:08.505 } 00:22:08.505 } 00:22:08.505 ] 00:22:08.505 }, 00:22:08.505 { 00:22:08.505 "subsystem": "sock", 00:22:08.505 "config": [ 00:22:08.505 { 00:22:08.505 "method": "sock_set_default_impl", 00:22:08.505 "params": { 00:22:08.505 "impl_name": "posix" 00:22:08.505 } 00:22:08.505 }, 00:22:08.505 { 00:22:08.505 "method": "sock_impl_set_options", 00:22:08.505 "params": { 00:22:08.505 "impl_name": "ssl", 00:22:08.505 "recv_buf_size": 4096, 00:22:08.505 "send_buf_size": 4096, 00:22:08.505 "enable_recv_pipe": true, 00:22:08.505 "enable_quickack": false, 00:22:08.505 "enable_placement_id": 0, 00:22:08.505 "enable_zerocopy_send_server": true, 00:22:08.505 "enable_zerocopy_send_client": false, 00:22:08.505 "zerocopy_threshold": 0, 00:22:08.505 "tls_version": 0, 00:22:08.505 "enable_ktls": false 00:22:08.505 } 00:22:08.505 }, 00:22:08.505 { 00:22:08.505 "method": "sock_impl_set_options", 00:22:08.505 "params": { 00:22:08.505 "impl_name": "posix", 00:22:08.505 "recv_buf_size": 2097152, 00:22:08.505 "send_buf_size": 2097152, 00:22:08.505 "enable_recv_pipe": true, 00:22:08.506 "enable_quickack": false, 00:22:08.506 "enable_placement_id": 0, 00:22:08.506 
"enable_zerocopy_send_server": true, 00:22:08.506 "enable_zerocopy_send_client": false, 00:22:08.506 "zerocopy_threshold": 0, 00:22:08.506 "tls_version": 0, 00:22:08.506 "enable_ktls": false 00:22:08.506 } 00:22:08.506 } 00:22:08.506 ] 00:22:08.506 }, 00:22:08.506 { 00:22:08.506 "subsystem": "vmd", 00:22:08.506 "config": [] 00:22:08.506 }, 00:22:08.506 { 00:22:08.506 "subsystem": "accel", 00:22:08.506 "config": [ 00:22:08.506 { 00:22:08.506 "method": "accel_set_options", 00:22:08.506 "params": { 00:22:08.506 "small_cache_size": 128, 00:22:08.506 "large_cache_size": 16, 00:22:08.506 "task_count": 2048, 00:22:08.506 "sequence_count": 2048, 00:22:08.506 "buf_count": 2048 00:22:08.506 } 00:22:08.506 } 00:22:08.506 ] 00:22:08.506 }, 00:22:08.506 { 00:22:08.506 "subsystem": "bdev", 00:22:08.506 "config": [ 00:22:08.506 { 00:22:08.506 "method": "bdev_set_options", 00:22:08.506 "params": { 00:22:08.506 "bdev_io_pool_size": 65535, 00:22:08.506 "bdev_io_cache_size": 256, 00:22:08.506 "bdev_auto_examine": true, 00:22:08.506 "iobuf_small_cache_size": 128, 00:22:08.506 "iobuf_large_cache_size": 16 00:22:08.506 } 00:22:08.506 }, 00:22:08.506 { 00:22:08.506 "method": "bdev_raid_set_options", 00:22:08.506 "params": { 00:22:08.506 "process_window_size_kb": 1024, 00:22:08.506 "process_max_bandwidth_mb_sec": 0 00:22:08.506 } 00:22:08.506 }, 00:22:08.506 { 00:22:08.506 "method": "bdev_iscsi_set_options", 00:22:08.506 "params": { 00:22:08.506 "timeout_sec": 30 00:22:08.506 } 00:22:08.506 }, 00:22:08.506 { 00:22:08.506 "method": "bdev_nvme_set_options", 00:22:08.506 "params": { 00:22:08.506 "action_on_timeout": "none", 00:22:08.506 "timeout_us": 0, 00:22:08.506 "timeout_admin_us": 0, 00:22:08.506 "keep_alive_timeout_ms": 10000, 00:22:08.506 "arbitration_burst": 0, 00:22:08.506 "low_priority_weight": 0, 00:22:08.506 "medium_priority_weight": 0, 00:22:08.506 "high_priority_weight": 0, 00:22:08.506 "nvme_adminq_poll_period_us": 10000, 00:22:08.506 "nvme_ioq_poll_period_us": 0, 00:22:08.506 
"io_queue_requests": 512, 00:22:08.506 "delay_cmd_submit": true, 00:22:08.506 "transport_retry_count": 4, 00:22:08.506 "bdev_retry_count": 3, 00:22:08.506 "transport_ack_timeout": 0, 00:22:08.506 "ctrlr_loss_timeout_sec": 0, 00:22:08.506 "reconnect_delay_sec": 0, 00:22:08.506 "fast_io_fail_timeout_sec": 0, 00:22:08.506 "disable_auto_failback": false, 00:22:08.506 "generate_uuids": false, 00:22:08.506 "transport_tos": 0, 00:22:08.506 "nvme_error_stat": false, 00:22:08.506 "rdma_srq_size": 0, 00:22:08.506 "io_path_stat": false, 00:22:08.506 "allow_accel_sequence": false, 00:22:08.506 "rdma_max_cq_size": 0, 00:22:08.506 "rdma_cm_event_timeout_ms": 0, 00:22:08.506 "dhchap_digests": [ 00:22:08.506 "sha256", 00:22:08.506 "sha384", 00:22:08.506 "sha512" 00:22:08.506 ], 00:22:08.506 "dhchap_dhgroups": [ 00:22:08.506 "null", 00:22:08.506 "ffdhe2048", 00:22:08.506 "ffdhe3072", 00:22:08.506 "ffdhe4096", 00:22:08.506 "ffdhe6144", 00:22:08.506 "ffdhe8192" 00:22:08.506 ] 00:22:08.506 } 00:22:08.506 }, 00:22:08.506 { 00:22:08.506 "method": "bdev_nvme_attach_controller", 00:22:08.506 "params": { 00:22:08.506 "name": "TLSTEST", 00:22:08.506 "trtype": "TCP", 00:22:08.506 "adrfam": "IPv4", 00:22:08.506 "traddr": "10.0.0.2", 00:22:08.506 "trsvcid": "4420", 00:22:08.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.506 "prchk_reftag": false, 00:22:08.506 "prchk_guard": false, 00:22:08.506 "ctrlr_loss_timeout_sec": 0, 00:22:08.506 "reconnect_delay_sec": 0, 00:22:08.506 "fast_io_fail_timeout_sec": 0, 00:22:08.506 "psk": "/tmp/tmp.5AOGPiS0a3", 00:22:08.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:08.506 "hdgst": false, 00:22:08.506 "ddgst": false 00:22:08.506 } 00:22:08.506 }, 00:22:08.506 { 00:22:08.506 "method": "bdev_nvme_set_hotplug", 00:22:08.506 "params": { 00:22:08.506 "period_us": 100000, 00:22:08.506 "enable": false 00:22:08.506 } 00:22:08.506 }, 00:22:08.506 { 00:22:08.506 "method": "bdev_wait_for_examine" 00:22:08.506 } 00:22:08.506 ] 00:22:08.506 }, 00:22:08.506 { 
00:22:08.506 "subsystem": "nbd", 00:22:08.506 "config": [] 00:22:08.506 } 00:22:08.506 ] 00:22:08.506 }' 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1003637 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1003637 ']' 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1003637 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1003637 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1003637' 00:22:08.506 killing process with pid 1003637 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1003637 00:22:08.506 Received shutdown signal, test time was about 10.000000 seconds 00:22:08.506 00:22:08.506 Latency(us) 00:22:08.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.506 =================================================================================================================== 00:22:08.506 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:08.506 [2024-07-26 08:55:26.894199] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:08.506 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # 
wait 1003637 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1003352 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1003352 ']' 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1003352 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1003352 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1003352' 00:22:08.764 killing process with pid 1003352 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1003352 00:22:08.764 [2024-07-26 08:55:27.136729] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:08.764 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1003352 00:22:09.023 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:09.023 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:09.023 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:09.023 "subsystems": [ 00:22:09.023 { 00:22:09.023 "subsystem": "keyring", 00:22:09.023 "config": [] 00:22:09.023 }, 00:22:09.023 { 00:22:09.023 "subsystem": 
"iobuf", 00:22:09.023 "config": [ 00:22:09.023 { 00:22:09.023 "method": "iobuf_set_options", 00:22:09.023 "params": { 00:22:09.023 "small_pool_count": 8192, 00:22:09.023 "large_pool_count": 1024, 00:22:09.023 "small_bufsize": 8192, 00:22:09.023 "large_bufsize": 135168 00:22:09.023 } 00:22:09.023 } 00:22:09.023 ] 00:22:09.023 }, 00:22:09.023 { 00:22:09.023 "subsystem": "sock", 00:22:09.023 "config": [ 00:22:09.023 { 00:22:09.023 "method": "sock_set_default_impl", 00:22:09.023 "params": { 00:22:09.023 "impl_name": "posix" 00:22:09.023 } 00:22:09.023 }, 00:22:09.023 { 00:22:09.023 "method": "sock_impl_set_options", 00:22:09.023 "params": { 00:22:09.023 "impl_name": "ssl", 00:22:09.023 "recv_buf_size": 4096, 00:22:09.023 "send_buf_size": 4096, 00:22:09.023 "enable_recv_pipe": true, 00:22:09.023 "enable_quickack": false, 00:22:09.023 "enable_placement_id": 0, 00:22:09.023 "enable_zerocopy_send_server": true, 00:22:09.023 "enable_zerocopy_send_client": false, 00:22:09.023 "zerocopy_threshold": 0, 00:22:09.023 "tls_version": 0, 00:22:09.023 "enable_ktls": false 00:22:09.023 } 00:22:09.023 }, 00:22:09.023 { 00:22:09.023 "method": "sock_impl_set_options", 00:22:09.023 "params": { 00:22:09.023 "impl_name": "posix", 00:22:09.023 "recv_buf_size": 2097152, 00:22:09.023 "send_buf_size": 2097152, 00:22:09.023 "enable_recv_pipe": true, 00:22:09.023 "enable_quickack": false, 00:22:09.023 "enable_placement_id": 0, 00:22:09.023 "enable_zerocopy_send_server": true, 00:22:09.023 "enable_zerocopy_send_client": false, 00:22:09.023 "zerocopy_threshold": 0, 00:22:09.023 "tls_version": 0, 00:22:09.023 "enable_ktls": false 00:22:09.023 } 00:22:09.023 } 00:22:09.023 ] 00:22:09.023 }, 00:22:09.023 { 00:22:09.023 "subsystem": "vmd", 00:22:09.023 "config": [] 00:22:09.023 }, 00:22:09.023 { 00:22:09.023 "subsystem": "accel", 00:22:09.023 "config": [ 00:22:09.023 { 00:22:09.023 "method": "accel_set_options", 00:22:09.023 "params": { 00:22:09.023 "small_cache_size": 128, 00:22:09.023 
"large_cache_size": 16, 00:22:09.023 "task_count": 2048, 00:22:09.023 "sequence_count": 2048, 00:22:09.023 "buf_count": 2048 00:22:09.023 } 00:22:09.023 } 00:22:09.023 ] 00:22:09.023 }, 00:22:09.023 { 00:22:09.023 "subsystem": "bdev", 00:22:09.023 "config": [ 00:22:09.023 { 00:22:09.023 "method": "bdev_set_options", 00:22:09.023 "params": { 00:22:09.023 "bdev_io_pool_size": 65535, 00:22:09.023 "bdev_io_cache_size": 256, 00:22:09.023 "bdev_auto_examine": true, 00:22:09.023 "iobuf_small_cache_size": 128, 00:22:09.023 "iobuf_large_cache_size": 16 00:22:09.023 } 00:22:09.023 }, 00:22:09.023 { 00:22:09.023 "method": "bdev_raid_set_options", 00:22:09.023 "params": { 00:22:09.023 "process_window_size_kb": 1024, 00:22:09.023 "process_max_bandwidth_mb_sec": 0 00:22:09.023 } 00:22:09.023 }, 00:22:09.023 { 00:22:09.023 "method": "bdev_iscsi_set_options", 00:22:09.023 "params": { 00:22:09.023 "timeout_sec": 30 00:22:09.023 } 00:22:09.023 }, 00:22:09.023 { 00:22:09.023 "method": "bdev_nvme_set_options", 00:22:09.023 "params": { 00:22:09.023 "action_on_timeout": "none", 00:22:09.023 "timeout_us": 0, 00:22:09.023 "timeout_admin_us": 0, 00:22:09.023 "keep_alive_timeout_ms": 10000, 00:22:09.023 "arbitration_burst": 0, 00:22:09.023 "low_priority_weight": 0, 00:22:09.023 "medium_priority_weight": 0, 00:22:09.023 "high_priority_weight": 0, 00:22:09.023 "nvme_adminq_poll_period_us": 10000, 00:22:09.023 "nvme_ioq_poll_period_us": 0, 00:22:09.023 "io_queue_requests": 0, 00:22:09.023 "delay_cmd_submit": true, 00:22:09.023 "transport_retry_count": 4, 00:22:09.023 "bdev_retry_count": 3, 00:22:09.023 "transport_ack_timeout": 0, 00:22:09.023 "ctrlr_loss_timeout_sec": 0, 00:22:09.023 "reconnect_delay_sec": 0, 00:22:09.023 "fast_io_fail_timeout_sec": 0, 00:22:09.023 "disable_auto_failback": false, 00:22:09.023 "generate_uuids": false, 00:22:09.023 "transport_tos": 0, 00:22:09.023 "nvme_error_stat": false, 00:22:09.023 "rdma_srq_size": 0, 00:22:09.023 "io_path_stat": false, 00:22:09.023 
"allow_accel_sequence": false, 00:22:09.023 "rdma_max_cq_size": 0, 00:22:09.024 "rdma_cm_event_timeout_ms": 0, 00:22:09.024 "dhchap_digests": [ 00:22:09.024 "sha256", 00:22:09.024 "sha384", 00:22:09.024 "sha512" 00:22:09.024 ], 00:22:09.024 "dhchap_dhgroups": [ 00:22:09.024 "null", 00:22:09.024 "ffdhe2048", 00:22:09.024 "ffdhe3072", 00:22:09.024 "ffdhe4096", 00:22:09.024 "ffdhe6144", 00:22:09.024 "ffdhe8192" 00:22:09.024 ] 00:22:09.024 } 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "method": "bdev_nvme_set_hotplug", 00:22:09.024 "params": { 00:22:09.024 "period_us": 100000, 00:22:09.024 "enable": false 00:22:09.024 } 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "method": "bdev_malloc_create", 00:22:09.024 "params": { 00:22:09.024 "name": "malloc0", 00:22:09.024 "num_blocks": 8192, 00:22:09.024 "block_size": 4096, 00:22:09.024 "physical_block_size": 4096, 00:22:09.024 "uuid": "d6560298-7675-45b2-b20a-fba7d3a1882a", 00:22:09.024 "optimal_io_boundary": 0, 00:22:09.024 "md_size": 0, 00:22:09.024 "dif_type": 0, 00:22:09.024 "dif_is_head_of_md": false, 00:22:09.024 "dif_pi_format": 0 00:22:09.024 } 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "method": "bdev_wait_for_examine" 00:22:09.024 } 00:22:09.024 ] 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "subsystem": "nbd", 00:22:09.024 "config": [] 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "subsystem": "scheduler", 00:22:09.024 "config": [ 00:22:09.024 { 00:22:09.024 "method": "framework_set_scheduler", 00:22:09.024 "params": { 00:22:09.024 "name": "static" 00:22:09.024 } 00:22:09.024 } 00:22:09.024 ] 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "subsystem": "nvmf", 00:22:09.024 "config": [ 00:22:09.024 { 00:22:09.024 "method": "nvmf_set_config", 00:22:09.024 "params": { 00:22:09.024 "discovery_filter": "match_any", 00:22:09.024 "admin_cmd_passthru": { 00:22:09.024 "identify_ctrlr": false 00:22:09.024 } 00:22:09.024 } 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "method": "nvmf_set_max_subsystems", 00:22:09.024 "params": { 
00:22:09.024 "max_subsystems": 1024 00:22:09.024 } 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "method": "nvmf_set_crdt", 00:22:09.024 "params": { 00:22:09.024 "crdt1": 0, 00:22:09.024 "crdt2": 0, 00:22:09.024 "crdt3": 0 00:22:09.024 } 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "method": "nvmf_create_transport", 00:22:09.024 "params": { 00:22:09.024 "trtype": "TCP", 00:22:09.024 "max_queue_depth": 128, 00:22:09.024 "max_io_qpairs_per_ctrlr": 127, 00:22:09.024 "in_capsule_data_size": 4096, 00:22:09.024 "max_io_size": 131072, 00:22:09.024 "io_unit_size": 131072, 00:22:09.024 "max_aq_depth": 128, 00:22:09.024 "num_shared_buffers": 511, 00:22:09.024 "buf_cache_size": 4294967295, 00:22:09.024 "dif_insert_or_strip": false, 00:22:09.024 "zcopy": false, 00:22:09.024 "c2h_success": false, 00:22:09.024 "sock_priority": 0, 00:22:09.024 "abort_timeout_sec": 1, 00:22:09.024 "ack_timeout": 0, 00:22:09.024 "data_wr_pool_size": 0 00:22:09.024 } 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "method": "nvmf_create_subsystem", 00:22:09.024 "params": { 00:22:09.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.024 "allow_any_host": false, 00:22:09.024 "serial_number": "SPDK00000000000001", 00:22:09.024 "model_number": "SPDK bdev Controller", 00:22:09.024 "max_namespaces": 10, 00:22:09.024 "min_cntlid": 1, 00:22:09.024 "max_cntlid": 65519, 00:22:09.024 "ana_reporting": false 00:22:09.024 } 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "method": "nvmf_subsystem_add_host", 00:22:09.024 "params": { 00:22:09.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.024 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.024 "psk": "/tmp/tmp.5AOGPiS0a3" 00:22:09.024 } 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "method": "nvmf_subsystem_add_ns", 00:22:09.024 "params": { 00:22:09.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.024 "namespace": { 00:22:09.024 "nsid": 1, 00:22:09.024 "bdev_name": "malloc0", 00:22:09.024 "nguid": "D6560298767545B2B20AFBA7D3A1882A", 00:22:09.024 "uuid": 
"d6560298-7675-45b2-b20a-fba7d3a1882a", 00:22:09.024 "no_auto_visible": false 00:22:09.024 } 00:22:09.024 } 00:22:09.024 }, 00:22:09.024 { 00:22:09.024 "method": "nvmf_subsystem_add_listener", 00:22:09.024 "params": { 00:22:09.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.024 "listen_address": { 00:22:09.024 "trtype": "TCP", 00:22:09.024 "adrfam": "IPv4", 00:22:09.024 "traddr": "10.0.0.2", 00:22:09.024 "trsvcid": "4420" 00:22:09.024 }, 00:22:09.024 "secure_channel": true 00:22:09.024 } 00:22:09.024 } 00:22:09.024 ] 00:22:09.024 } 00:22:09.024 ] 00:22:09.024 }' 00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1003910 00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1003910 00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1003910 ']' 00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:09.024 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.024 [2024-07-26 08:55:27.440773] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:09.024 [2024-07-26 08:55:27.440881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.024 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.024 [2024-07-26 08:55:27.479739] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:09.283 [2024-07-26 08:55:27.507808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.283 [2024-07-26 08:55:27.595899] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.283 [2024-07-26 08:55:27.595994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.283 [2024-07-26 08:55:27.596008] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.283 [2024-07-26 08:55:27.596019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.283 [2024-07-26 08:55:27.596029] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.283 [2024-07-26 08:55:27.596141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.542 [2024-07-26 08:55:27.833429] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.542 [2024-07-26 08:55:27.862598] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:09.542 [2024-07-26 08:55:27.878669] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:09.542 [2024-07-26 08:55:27.878932] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1004061 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1004061 /var/tmp/bdevperf.sock 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1004061 ']' 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.109 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:10.109 "subsystems": [ 00:22:10.109 { 00:22:10.109 "subsystem": "keyring", 00:22:10.109 "config": [] 00:22:10.109 }, 00:22:10.109 { 00:22:10.109 "subsystem": "iobuf", 00:22:10.109 "config": [ 00:22:10.109 { 00:22:10.109 "method": "iobuf_set_options", 00:22:10.109 "params": { 00:22:10.109 "small_pool_count": 8192, 00:22:10.109 "large_pool_count": 1024, 00:22:10.109 "small_bufsize": 8192, 00:22:10.109 "large_bufsize": 135168 00:22:10.109 } 00:22:10.109 } 00:22:10.109 ] 00:22:10.109 }, 00:22:10.109 { 00:22:10.109 "subsystem": "sock", 00:22:10.109 "config": [ 00:22:10.109 { 00:22:10.109 "method": "sock_set_default_impl", 00:22:10.109 "params": { 00:22:10.109 "impl_name": "posix" 00:22:10.109 } 00:22:10.109 }, 00:22:10.109 { 00:22:10.109 "method": "sock_impl_set_options", 00:22:10.109 "params": { 00:22:10.109 "impl_name": "ssl", 00:22:10.109 "recv_buf_size": 4096, 00:22:10.109 "send_buf_size": 4096, 00:22:10.109 "enable_recv_pipe": true, 00:22:10.109 "enable_quickack": false, 00:22:10.109 "enable_placement_id": 0, 00:22:10.109 "enable_zerocopy_send_server": true, 00:22:10.109 "enable_zerocopy_send_client": false, 00:22:10.109 "zerocopy_threshold": 0, 00:22:10.109 "tls_version": 0, 00:22:10.109 "enable_ktls": false 00:22:10.109 } 00:22:10.109 }, 00:22:10.109 { 00:22:10.109 "method": "sock_impl_set_options", 00:22:10.109 "params": { 00:22:10.109 "impl_name": "posix", 00:22:10.109 "recv_buf_size": 2097152, 00:22:10.109 "send_buf_size": 2097152, 00:22:10.109 "enable_recv_pipe": true, 00:22:10.109 "enable_quickack": false, 00:22:10.109 "enable_placement_id": 0, 00:22:10.109 "enable_zerocopy_send_server": true, 00:22:10.109 "enable_zerocopy_send_client": false, 00:22:10.109 "zerocopy_threshold": 
0, 00:22:10.109 "tls_version": 0, 00:22:10.109 "enable_ktls": false 00:22:10.109 } 00:22:10.109 } 00:22:10.109 ] 00:22:10.109 }, 00:22:10.109 { 00:22:10.109 "subsystem": "vmd", 00:22:10.109 "config": [] 00:22:10.109 }, 00:22:10.109 { 00:22:10.109 "subsystem": "accel", 00:22:10.109 "config": [ 00:22:10.109 { 00:22:10.109 "method": "accel_set_options", 00:22:10.109 "params": { 00:22:10.109 "small_cache_size": 128, 00:22:10.109 "large_cache_size": 16, 00:22:10.109 "task_count": 2048, 00:22:10.109 "sequence_count": 2048, 00:22:10.109 "buf_count": 2048 00:22:10.110 } 00:22:10.110 } 00:22:10.110 ] 00:22:10.110 }, 00:22:10.110 { 00:22:10.110 "subsystem": "bdev", 00:22:10.110 "config": [ 00:22:10.110 { 00:22:10.110 "method": "bdev_set_options", 00:22:10.110 "params": { 00:22:10.110 "bdev_io_pool_size": 65535, 00:22:10.110 "bdev_io_cache_size": 256, 00:22:10.110 "bdev_auto_examine": true, 00:22:10.110 "iobuf_small_cache_size": 128, 00:22:10.110 "iobuf_large_cache_size": 16 00:22:10.110 } 00:22:10.110 }, 00:22:10.110 { 00:22:10.110 "method": "bdev_raid_set_options", 00:22:10.110 "params": { 00:22:10.110 "process_window_size_kb": 1024, 00:22:10.110 "process_max_bandwidth_mb_sec": 0 00:22:10.110 } 00:22:10.110 }, 00:22:10.110 { 00:22:10.110 "method": "bdev_iscsi_set_options", 00:22:10.110 "params": { 00:22:10.110 "timeout_sec": 30 00:22:10.110 } 00:22:10.110 }, 00:22:10.110 { 00:22:10.110 "method": "bdev_nvme_set_options", 00:22:10.110 "params": { 00:22:10.110 "action_on_timeout": "none", 00:22:10.110 "timeout_us": 0, 00:22:10.110 "timeout_admin_us": 0, 00:22:10.110 "keep_alive_timeout_ms": 10000, 00:22:10.110 "arbitration_burst": 0, 00:22:10.110 "low_priority_weight": 0, 00:22:10.110 "medium_priority_weight": 0, 00:22:10.110 "high_priority_weight": 0, 00:22:10.110 "nvme_adminq_poll_period_us": 10000, 00:22:10.110 "nvme_ioq_poll_period_us": 0, 00:22:10.110 "io_queue_requests": 512, 00:22:10.110 "delay_cmd_submit": true, 00:22:10.110 "transport_retry_count": 4, 00:22:10.110 
"bdev_retry_count": 3, 00:22:10.110 "transport_ack_timeout": 0, 00:22:10.110 "ctrlr_loss_timeout_sec": 0, 00:22:10.110 "reconnect_delay_sec": 0, 00:22:10.110 "fast_io_fail_timeout_sec": 0, 00:22:10.110 "disable_auto_failback": false, 00:22:10.110 "generate_uuids": false, 00:22:10.110 "transport_tos": 0, 00:22:10.110 "nvme_error_stat": false, 00:22:10.110 "rdma_srq_size": 0, 00:22:10.110 "io_path_stat": false, 00:22:10.110 "allow_accel_sequence": false, 00:22:10.110 "rdma_max_cq_size": 0, 00:22:10.110 "rdma_cm_event_timeout_ms": 0, 00:22:10.110 "dhchap_digests": [ 00:22:10.110 "sha256", 00:22:10.110 "sha384", 00:22:10.110 "sha512" 00:22:10.110 ], 00:22:10.110 "dhchap_dhgroups": [ 00:22:10.110 "null", 00:22:10.110 "ffdhe2048", 00:22:10.110 "ffdhe3072", 00:22:10.110 "ffdhe4096", 00:22:10.110 "ffdhe6144", 00:22:10.110 "ffdhe8192" 00:22:10.110 ] 00:22:10.110 } 00:22:10.110 }, 00:22:10.110 { 00:22:10.110 "method": "bdev_nvme_attach_controller", 00:22:10.110 "params": { 00:22:10.110 "name": "TLSTEST", 00:22:10.110 "trtype": "TCP", 00:22:10.110 "adrfam": "IPv4", 00:22:10.110 "traddr": "10.0.0.2", 00:22:10.110 "trsvcid": "4420", 00:22:10.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.110 "prchk_reftag": false, 00:22:10.110 "prchk_guard": false, 00:22:10.110 "ctrlr_loss_timeout_sec": 0, 00:22:10.110 "reconnect_delay_sec": 0, 00:22:10.110 "fast_io_fail_timeout_sec": 0, 00:22:10.110 "psk": "/tmp/tmp.5AOGPiS0a3", 00:22:10.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.110 "hdgst": false, 00:22:10.110 "ddgst": false 00:22:10.110 } 00:22:10.110 }, 00:22:10.110 { 00:22:10.110 "method": "bdev_nvme_set_hotplug", 00:22:10.110 "params": { 00:22:10.110 "period_us": 100000, 00:22:10.110 "enable": false 00:22:10.110 } 00:22:10.110 }, 00:22:10.110 { 00:22:10.110 "method": "bdev_wait_for_examine" 00:22:10.110 } 00:22:10.110 ] 00:22:10.110 }, 00:22:10.110 { 00:22:10.110 "subsystem": "nbd", 00:22:10.110 "config": [] 00:22:10.110 } 00:22:10.110 ] 00:22:10.110 }' 00:22:10.110 
08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.110 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.110 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.110 [2024-07-26 08:55:28.513315] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:10.110 [2024-07-26 08:55:28.513390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004061 ] 00:22:10.110 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.110 [2024-07-26 08:55:28.546202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:10.369 [2024-07-26 08:55:28.574301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.369 [2024-07-26 08:55:28.657835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.369 [2024-07-26 08:55:28.823735] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.369 [2024-07-26 08:55:28.823881] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:11.304 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.304 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:11.304 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:11.304 Running I/O for 10 seconds... 
00:22:21.269 00:22:21.269 Latency(us) 00:22:21.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.269 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.269 Verification LBA range: start 0x0 length 0x2000 00:22:21.269 TLSTESTn1 : 10.03 3441.03 13.44 0.00 0.00 37110.94 10582.85 55924.05 00:22:21.269 =================================================================================================================== 00:22:21.269 Total : 3441.03 13.44 0.00 0.00 37110.94 10582.85 55924.05 00:22:21.269 0 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1004061 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1004061 ']' 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1004061 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1004061 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1004061' 00:22:21.269 killing process with pid 1004061 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1004061 00:22:21.269 Received shutdown signal, test time was about 10.000000 seconds 
00:22:21.269 00:22:21.269 Latency(us) 00:22:21.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.269 =================================================================================================================== 00:22:21.269 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.269 [2024-07-26 08:55:39.712647] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:21.269 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1004061 00:22:21.527 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1003910 00:22:21.527 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1003910 ']' 00:22:21.527 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1003910 00:22:21.527 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:21.527 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.527 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1003910 00:22:21.527 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:21.527 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:21.527 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1003910' 00:22:21.527 killing process with pid 1003910 00:22:21.527 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1003910 00:22:21.527 [2024-07-26 08:55:39.968288] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:21.527 
08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1003910 00:22:21.785 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:21.785 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.785 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.785 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.785 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1005397 00:22:21.785 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:21.785 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1005397 00:22:21.785 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1005397 ']' 00:22:21.785 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.785 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:21.786 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.786 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:21.786 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.044 [2024-07-26 08:55:40.250197] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:22:22.044 [2024-07-26 08:55:40.250273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.044 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.044 [2024-07-26 08:55:40.286478] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:22.044 [2024-07-26 08:55:40.324211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.044 [2024-07-26 08:55:40.420553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.044 [2024-07-26 08:55:40.420632] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.044 [2024-07-26 08:55:40.420667] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.044 [2024-07-26 08:55:40.420684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.044 [2024-07-26 08:55:40.420698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:22.044 [2024-07-26 08:55:40.420740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.302 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:22.302 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:22.302 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.302 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:22.302 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.302 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.302 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.5AOGPiS0a3 00:22:22.302 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5AOGPiS0a3 00:22:22.302 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:22.560 [2024-07-26 08:55:40.845524] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.560 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:22.818 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:23.076 [2024-07-26 08:55:41.431087] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.076 [2024-07-26 08:55:41.431354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:23.076 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:23.334 malloc0 00:22:23.334 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:23.592 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5AOGPiS0a3 00:22:23.850 [2024-07-26 08:55:42.281124] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:23.850 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1005676 00:22:23.850 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:23.850 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.850 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1005676 /var/tmp/bdevperf.sock 00:22:23.850 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1005676 ']' 00:22:23.850 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.850 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.850 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:23.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.850 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.850 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.108 [2024-07-26 08:55:42.337311] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:24.108 [2024-07-26 08:55:42.337390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005676 ] 00:22:24.108 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.108 [2024-07-26 08:55:42.368964] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:24.108 [2024-07-26 08:55:42.397344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.108 [2024-07-26 08:55:42.484506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.392 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.392 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:24.392 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5AOGPiS0a3 00:22:24.392 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:24.655 [2024-07-26 08:55:43.063191] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.920 nvme0n1 00:22:24.920 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:24.920 Running I/O for 1 seconds... 
00:22:25.855 
00:22:25.855 Latency(us)
00:22:25.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:25.855 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:25.855 Verification LBA range: start 0x0 length 0x2000
00:22:25.855 nvme0n1 : 1.03 3385.06 13.22 0.00 0.00 37198.07 8058.50 67574.90
00:22:25.855 ===================================================================================================================
00:22:25.855 Total : 3385.06 13.22 0.00 0.00 37198.07 8058.50 67574.90
00:22:25.855 0
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1005676
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1005676 ']'
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1005676
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1005676
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1005676'
killing process with pid 1005676
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1005676
Received shutdown signal, test time was about 1.000000 seconds
00:22:26.114 
00:22:26.114 Latency(us)
00:22:26.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:26.114 ===================================================================================================================
00:22:26.114 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1005676
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1005397
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1005397 ']'
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1005397
00:22:26.114 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:22:26.372 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:26.372 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1005397
00:22:26.372 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:26.372 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:26.372 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1005397'
killing process with pid 1005397
00:22:26.372 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1005397
00:22:26.372 [2024-07-26 08:55:44.601894] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:22:26.372 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1005397
00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart
00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:26.631 
08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1005958 00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1005958 00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1005958 ']' 00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.631 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.631 [2024-07-26 08:55:44.914869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:22:26.631 [2024-07-26 08:55:44.914946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.631 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.631 [2024-07-26 08:55:44.952220] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:26.631 [2024-07-26 08:55:44.988791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.631 [2024-07-26 08:55:45.078403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.631 [2024-07-26 08:55:45.078451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.631 [2024-07-26 08:55:45.078478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.631 [2024-07-26 08:55:45.078488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.631 [2024-07-26 08:55:45.078498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:26.631 [2024-07-26 08:55:45.078522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.890 [2024-07-26 08:55:45.218865] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.890 malloc0 00:22:26.890 [2024-07-26 08:55:45.251021] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.890 [2024-07-26 08:55:45.257285] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1006045 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 1006045 /var/tmp/bdevperf.sock 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1006045 ']' 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.890 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.890 [2024-07-26 08:55:45.324879] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:26.890 [2024-07-26 08:55:45.324960] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006045 ] 00:22:27.149 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.149 [2024-07-26 08:55:45.358294] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:27.149 [2024-07-26 08:55:45.388340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.149 [2024-07-26 08:55:45.478680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.149 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.149 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:27.149 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5AOGPiS0a3 00:22:27.713 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:27.713 [2024-07-26 08:55:46.119671] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.971 nvme0n1 00:22:27.971 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.971 Running I/O for 1 seconds... 
00:22:28.905 
00:22:28.905 Latency(us)
00:22:28.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:28.905 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:28.905 Verification LBA range: start 0x0 length 0x2000
00:22:28.906 nvme0n1 : 1.04 3218.80 12.57 0.00 0.00 39114.08 7233.23 63691.28
00:22:28.906 ===================================================================================================================
00:22:28.906 Total : 3218.80 12.57 0.00 0.00 39114.08 7233.23 63691.28
00:22:28.906 0
00:22:28.906 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config
00:22:28.906 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:28.906 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:29.164 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:29.164 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{
00:22:29.164 "subsystems": [
00:22:29.164 {
00:22:29.164 "subsystem": "keyring",
00:22:29.164 "config": [
00:22:29.164 {
00:22:29.164 "method": "keyring_file_add_key",
00:22:29.164 "params": {
00:22:29.164 "name": "key0",
00:22:29.164 "path": "/tmp/tmp.5AOGPiS0a3"
00:22:29.164 }
00:22:29.164 }
00:22:29.164 ]
00:22:29.164 },
00:22:29.164 {
00:22:29.164 "subsystem": "iobuf",
00:22:29.164 "config": [
00:22:29.164 {
00:22:29.164 "method": "iobuf_set_options",
00:22:29.164 "params": {
00:22:29.164 "small_pool_count": 8192,
00:22:29.164 "large_pool_count": 1024,
00:22:29.164 "small_bufsize": 8192,
00:22:29.164 "large_bufsize": 135168
00:22:29.164 }
00:22:29.164 }
00:22:29.164 ]
00:22:29.164 },
00:22:29.164 {
00:22:29.164 "subsystem": "sock",
00:22:29.164 "config": [
00:22:29.164 {
00:22:29.164 "method": "sock_set_default_impl",
00:22:29.164 "params": {
00:22:29.164 "impl_name": "posix"
00:22:29.164 }
00:22:29.164 }, 00:22:29.164 { 00:22:29.164 "method": "sock_impl_set_options", 00:22:29.164 "params": { 00:22:29.164 "impl_name": "ssl", 00:22:29.164 "recv_buf_size": 4096, 00:22:29.164 "send_buf_size": 4096, 00:22:29.164 "enable_recv_pipe": true, 00:22:29.164 "enable_quickack": false, 00:22:29.164 "enable_placement_id": 0, 00:22:29.164 "enable_zerocopy_send_server": true, 00:22:29.164 "enable_zerocopy_send_client": false, 00:22:29.164 "zerocopy_threshold": 0, 00:22:29.164 "tls_version": 0, 00:22:29.164 "enable_ktls": false 00:22:29.164 } 00:22:29.164 }, 00:22:29.164 { 00:22:29.164 "method": "sock_impl_set_options", 00:22:29.164 "params": { 00:22:29.164 "impl_name": "posix", 00:22:29.164 "recv_buf_size": 2097152, 00:22:29.164 "send_buf_size": 2097152, 00:22:29.164 "enable_recv_pipe": true, 00:22:29.164 "enable_quickack": false, 00:22:29.164 "enable_placement_id": 0, 00:22:29.164 "enable_zerocopy_send_server": true, 00:22:29.164 "enable_zerocopy_send_client": false, 00:22:29.164 "zerocopy_threshold": 0, 00:22:29.164 "tls_version": 0, 00:22:29.164 "enable_ktls": false 00:22:29.164 } 00:22:29.164 } 00:22:29.164 ] 00:22:29.164 }, 00:22:29.164 { 00:22:29.164 "subsystem": "vmd", 00:22:29.164 "config": [] 00:22:29.164 }, 00:22:29.164 { 00:22:29.164 "subsystem": "accel", 00:22:29.164 "config": [ 00:22:29.164 { 00:22:29.164 "method": "accel_set_options", 00:22:29.164 "params": { 00:22:29.164 "small_cache_size": 128, 00:22:29.164 "large_cache_size": 16, 00:22:29.164 "task_count": 2048, 00:22:29.164 "sequence_count": 2048, 00:22:29.164 "buf_count": 2048 00:22:29.164 } 00:22:29.164 } 00:22:29.164 ] 00:22:29.164 }, 00:22:29.164 { 00:22:29.164 "subsystem": "bdev", 00:22:29.164 "config": [ 00:22:29.164 { 00:22:29.164 "method": "bdev_set_options", 00:22:29.164 "params": { 00:22:29.164 "bdev_io_pool_size": 65535, 00:22:29.164 "bdev_io_cache_size": 256, 00:22:29.164 "bdev_auto_examine": true, 00:22:29.164 "iobuf_small_cache_size": 128, 00:22:29.165 "iobuf_large_cache_size": 16 
00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "bdev_raid_set_options", 00:22:29.165 "params": { 00:22:29.165 "process_window_size_kb": 1024, 00:22:29.165 "process_max_bandwidth_mb_sec": 0 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "bdev_iscsi_set_options", 00:22:29.165 "params": { 00:22:29.165 "timeout_sec": 30 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "bdev_nvme_set_options", 00:22:29.165 "params": { 00:22:29.165 "action_on_timeout": "none", 00:22:29.165 "timeout_us": 0, 00:22:29.165 "timeout_admin_us": 0, 00:22:29.165 "keep_alive_timeout_ms": 10000, 00:22:29.165 "arbitration_burst": 0, 00:22:29.165 "low_priority_weight": 0, 00:22:29.165 "medium_priority_weight": 0, 00:22:29.165 "high_priority_weight": 0, 00:22:29.165 "nvme_adminq_poll_period_us": 10000, 00:22:29.165 "nvme_ioq_poll_period_us": 0, 00:22:29.165 "io_queue_requests": 0, 00:22:29.165 "delay_cmd_submit": true, 00:22:29.165 "transport_retry_count": 4, 00:22:29.165 "bdev_retry_count": 3, 00:22:29.165 "transport_ack_timeout": 0, 00:22:29.165 "ctrlr_loss_timeout_sec": 0, 00:22:29.165 "reconnect_delay_sec": 0, 00:22:29.165 "fast_io_fail_timeout_sec": 0, 00:22:29.165 "disable_auto_failback": false, 00:22:29.165 "generate_uuids": false, 00:22:29.165 "transport_tos": 0, 00:22:29.165 "nvme_error_stat": false, 00:22:29.165 "rdma_srq_size": 0, 00:22:29.165 "io_path_stat": false, 00:22:29.165 "allow_accel_sequence": false, 00:22:29.165 "rdma_max_cq_size": 0, 00:22:29.165 "rdma_cm_event_timeout_ms": 0, 00:22:29.165 "dhchap_digests": [ 00:22:29.165 "sha256", 00:22:29.165 "sha384", 00:22:29.165 "sha512" 00:22:29.165 ], 00:22:29.165 "dhchap_dhgroups": [ 00:22:29.165 "null", 00:22:29.165 "ffdhe2048", 00:22:29.165 "ffdhe3072", 00:22:29.165 "ffdhe4096", 00:22:29.165 "ffdhe6144", 00:22:29.165 "ffdhe8192" 00:22:29.165 ] 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "bdev_nvme_set_hotplug", 00:22:29.165 "params": { 00:22:29.165 
"period_us": 100000, 00:22:29.165 "enable": false 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "bdev_malloc_create", 00:22:29.165 "params": { 00:22:29.165 "name": "malloc0", 00:22:29.165 "num_blocks": 8192, 00:22:29.165 "block_size": 4096, 00:22:29.165 "physical_block_size": 4096, 00:22:29.165 "uuid": "e3c2259a-9d0f-4931-be6c-912a3e479ebd", 00:22:29.165 "optimal_io_boundary": 0, 00:22:29.165 "md_size": 0, 00:22:29.165 "dif_type": 0, 00:22:29.165 "dif_is_head_of_md": false, 00:22:29.165 "dif_pi_format": 0 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "bdev_wait_for_examine" 00:22:29.165 } 00:22:29.165 ] 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "subsystem": "nbd", 00:22:29.165 "config": [] 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "subsystem": "scheduler", 00:22:29.165 "config": [ 00:22:29.165 { 00:22:29.165 "method": "framework_set_scheduler", 00:22:29.165 "params": { 00:22:29.165 "name": "static" 00:22:29.165 } 00:22:29.165 } 00:22:29.165 ] 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "subsystem": "nvmf", 00:22:29.165 "config": [ 00:22:29.165 { 00:22:29.165 "method": "nvmf_set_config", 00:22:29.165 "params": { 00:22:29.165 "discovery_filter": "match_any", 00:22:29.165 "admin_cmd_passthru": { 00:22:29.165 "identify_ctrlr": false 00:22:29.165 } 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "nvmf_set_max_subsystems", 00:22:29.165 "params": { 00:22:29.165 "max_subsystems": 1024 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "nvmf_set_crdt", 00:22:29.165 "params": { 00:22:29.165 "crdt1": 0, 00:22:29.165 "crdt2": 0, 00:22:29.165 "crdt3": 0 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "nvmf_create_transport", 00:22:29.165 "params": { 00:22:29.165 "trtype": "TCP", 00:22:29.165 "max_queue_depth": 128, 00:22:29.165 "max_io_qpairs_per_ctrlr": 127, 00:22:29.165 "in_capsule_data_size": 4096, 00:22:29.165 "max_io_size": 131072, 00:22:29.165 "io_unit_size": 
131072, 00:22:29.165 "max_aq_depth": 128, 00:22:29.165 "num_shared_buffers": 511, 00:22:29.165 "buf_cache_size": 4294967295, 00:22:29.165 "dif_insert_or_strip": false, 00:22:29.165 "zcopy": false, 00:22:29.165 "c2h_success": false, 00:22:29.165 "sock_priority": 0, 00:22:29.165 "abort_timeout_sec": 1, 00:22:29.165 "ack_timeout": 0, 00:22:29.165 "data_wr_pool_size": 0 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "nvmf_create_subsystem", 00:22:29.165 "params": { 00:22:29.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.165 "allow_any_host": false, 00:22:29.165 "serial_number": "00000000000000000000", 00:22:29.165 "model_number": "SPDK bdev Controller", 00:22:29.165 "max_namespaces": 32, 00:22:29.165 "min_cntlid": 1, 00:22:29.165 "max_cntlid": 65519, 00:22:29.165 "ana_reporting": false 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "nvmf_subsystem_add_host", 00:22:29.165 "params": { 00:22:29.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.165 "host": "nqn.2016-06.io.spdk:host1", 00:22:29.165 "psk": "key0" 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "nvmf_subsystem_add_ns", 00:22:29.165 "params": { 00:22:29.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.165 "namespace": { 00:22:29.165 "nsid": 1, 00:22:29.165 "bdev_name": "malloc0", 00:22:29.165 "nguid": "E3C2259A9D0F4931BE6C912A3E479EBD", 00:22:29.165 "uuid": "e3c2259a-9d0f-4931-be6c-912a3e479ebd", 00:22:29.165 "no_auto_visible": false 00:22:29.165 } 00:22:29.165 } 00:22:29.165 }, 00:22:29.165 { 00:22:29.165 "method": "nvmf_subsystem_add_listener", 00:22:29.165 "params": { 00:22:29.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.165 "listen_address": { 00:22:29.165 "trtype": "TCP", 00:22:29.165 "adrfam": "IPv4", 00:22:29.165 "traddr": "10.0.0.2", 00:22:29.165 "trsvcid": "4420" 00:22:29.165 }, 00:22:29.165 "secure_channel": false, 00:22:29.165 "sock_impl": "ssl" 00:22:29.165 } 00:22:29.165 } 00:22:29.165 ] 00:22:29.165 } 00:22:29.165 ] 
00:22:29.165 }' 00:22:29.165 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:29.424 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:29.424 "subsystems": [ 00:22:29.424 { 00:22:29.424 "subsystem": "keyring", 00:22:29.424 "config": [ 00:22:29.424 { 00:22:29.424 "method": "keyring_file_add_key", 00:22:29.424 "params": { 00:22:29.424 "name": "key0", 00:22:29.424 "path": "/tmp/tmp.5AOGPiS0a3" 00:22:29.424 } 00:22:29.424 } 00:22:29.424 ] 00:22:29.424 }, 00:22:29.424 { 00:22:29.424 "subsystem": "iobuf", 00:22:29.424 "config": [ 00:22:29.424 { 00:22:29.424 "method": "iobuf_set_options", 00:22:29.424 "params": { 00:22:29.424 "small_pool_count": 8192, 00:22:29.424 "large_pool_count": 1024, 00:22:29.424 "small_bufsize": 8192, 00:22:29.424 "large_bufsize": 135168 00:22:29.424 } 00:22:29.424 } 00:22:29.424 ] 00:22:29.424 }, 00:22:29.424 { 00:22:29.424 "subsystem": "sock", 00:22:29.424 "config": [ 00:22:29.424 { 00:22:29.424 "method": "sock_set_default_impl", 00:22:29.424 "params": { 00:22:29.424 "impl_name": "posix" 00:22:29.424 } 00:22:29.424 }, 00:22:29.424 { 00:22:29.424 "method": "sock_impl_set_options", 00:22:29.424 "params": { 00:22:29.424 "impl_name": "ssl", 00:22:29.424 "recv_buf_size": 4096, 00:22:29.424 "send_buf_size": 4096, 00:22:29.424 "enable_recv_pipe": true, 00:22:29.424 "enable_quickack": false, 00:22:29.424 "enable_placement_id": 0, 00:22:29.424 "enable_zerocopy_send_server": true, 00:22:29.424 "enable_zerocopy_send_client": false, 00:22:29.424 "zerocopy_threshold": 0, 00:22:29.424 "tls_version": 0, 00:22:29.424 "enable_ktls": false 00:22:29.424 } 00:22:29.424 }, 00:22:29.424 { 00:22:29.424 "method": "sock_impl_set_options", 00:22:29.424 "params": { 00:22:29.424 "impl_name": "posix", 00:22:29.424 "recv_buf_size": 2097152, 00:22:29.424 "send_buf_size": 2097152, 00:22:29.424 
"enable_recv_pipe": true, 00:22:29.424 "enable_quickack": false, 00:22:29.424 "enable_placement_id": 0, 00:22:29.424 "enable_zerocopy_send_server": true, 00:22:29.424 "enable_zerocopy_send_client": false, 00:22:29.424 "zerocopy_threshold": 0, 00:22:29.424 "tls_version": 0, 00:22:29.424 "enable_ktls": false 00:22:29.424 } 00:22:29.424 } 00:22:29.424 ] 00:22:29.424 }, 00:22:29.424 { 00:22:29.424 "subsystem": "vmd", 00:22:29.424 "config": [] 00:22:29.424 }, 00:22:29.424 { 00:22:29.424 "subsystem": "accel", 00:22:29.424 "config": [ 00:22:29.424 { 00:22:29.424 "method": "accel_set_options", 00:22:29.424 "params": { 00:22:29.424 "small_cache_size": 128, 00:22:29.424 "large_cache_size": 16, 00:22:29.424 "task_count": 2048, 00:22:29.424 "sequence_count": 2048, 00:22:29.424 "buf_count": 2048 00:22:29.424 } 00:22:29.424 } 00:22:29.424 ] 00:22:29.424 }, 00:22:29.424 { 00:22:29.424 "subsystem": "bdev", 00:22:29.424 "config": [ 00:22:29.424 { 00:22:29.424 "method": "bdev_set_options", 00:22:29.424 "params": { 00:22:29.424 "bdev_io_pool_size": 65535, 00:22:29.424 "bdev_io_cache_size": 256, 00:22:29.424 "bdev_auto_examine": true, 00:22:29.424 "iobuf_small_cache_size": 128, 00:22:29.424 "iobuf_large_cache_size": 16 00:22:29.424 } 00:22:29.424 }, 00:22:29.424 { 00:22:29.424 "method": "bdev_raid_set_options", 00:22:29.424 "params": { 00:22:29.424 "process_window_size_kb": 1024, 00:22:29.424 "process_max_bandwidth_mb_sec": 0 00:22:29.424 } 00:22:29.424 }, 00:22:29.424 { 00:22:29.424 "method": "bdev_iscsi_set_options", 00:22:29.424 "params": { 00:22:29.424 "timeout_sec": 30 00:22:29.424 } 00:22:29.424 }, 00:22:29.424 { 00:22:29.424 "method": "bdev_nvme_set_options", 00:22:29.424 "params": { 00:22:29.424 "action_on_timeout": "none", 00:22:29.424 "timeout_us": 0, 00:22:29.424 "timeout_admin_us": 0, 00:22:29.424 "keep_alive_timeout_ms": 10000, 00:22:29.424 "arbitration_burst": 0, 00:22:29.424 "low_priority_weight": 0, 00:22:29.424 "medium_priority_weight": 0, 00:22:29.424 
"high_priority_weight": 0, 00:22:29.424 "nvme_adminq_poll_period_us": 10000, 00:22:29.424 "nvme_ioq_poll_period_us": 0, 00:22:29.424 "io_queue_requests": 512, 00:22:29.424 "delay_cmd_submit": true, 00:22:29.424 "transport_retry_count": 4, 00:22:29.424 "bdev_retry_count": 3, 00:22:29.424 "transport_ack_timeout": 0, 00:22:29.424 "ctrlr_loss_timeout_sec": 0, 00:22:29.424 "reconnect_delay_sec": 0, 00:22:29.424 "fast_io_fail_timeout_sec": 0, 00:22:29.424 "disable_auto_failback": false, 00:22:29.424 "generate_uuids": false, 00:22:29.424 "transport_tos": 0, 00:22:29.424 "nvme_error_stat": false, 00:22:29.424 "rdma_srq_size": 0, 00:22:29.425 "io_path_stat": false, 00:22:29.425 "allow_accel_sequence": false, 00:22:29.425 "rdma_max_cq_size": 0, 00:22:29.425 "rdma_cm_event_timeout_ms": 0, 00:22:29.425 "dhchap_digests": [ 00:22:29.425 "sha256", 00:22:29.425 "sha384", 00:22:29.425 "sha512" 00:22:29.425 ], 00:22:29.425 "dhchap_dhgroups": [ 00:22:29.425 "null", 00:22:29.425 "ffdhe2048", 00:22:29.425 "ffdhe3072", 00:22:29.425 "ffdhe4096", 00:22:29.425 "ffdhe6144", 00:22:29.425 "ffdhe8192" 00:22:29.425 ] 00:22:29.425 } 00:22:29.425 }, 00:22:29.425 { 00:22:29.425 "method": "bdev_nvme_attach_controller", 00:22:29.425 "params": { 00:22:29.425 "name": "nvme0", 00:22:29.425 "trtype": "TCP", 00:22:29.425 "adrfam": "IPv4", 00:22:29.425 "traddr": "10.0.0.2", 00:22:29.425 "trsvcid": "4420", 00:22:29.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.425 "prchk_reftag": false, 00:22:29.425 "prchk_guard": false, 00:22:29.425 "ctrlr_loss_timeout_sec": 0, 00:22:29.425 "reconnect_delay_sec": 0, 00:22:29.425 "fast_io_fail_timeout_sec": 0, 00:22:29.425 "psk": "key0", 00:22:29.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.425 "hdgst": false, 00:22:29.425 "ddgst": false 00:22:29.425 } 00:22:29.425 }, 00:22:29.425 { 00:22:29.425 "method": "bdev_nvme_set_hotplug", 00:22:29.425 "params": { 00:22:29.425 "period_us": 100000, 00:22:29.425 "enable": false 00:22:29.425 } 00:22:29.425 }, 
00:22:29.425 { 00:22:29.425 "method": "bdev_enable_histogram", 00:22:29.425 "params": { 00:22:29.425 "name": "nvme0n1", 00:22:29.425 "enable": true 00:22:29.425 } 00:22:29.425 }, 00:22:29.425 { 00:22:29.425 "method": "bdev_wait_for_examine" 00:22:29.425 } 00:22:29.425 ] 00:22:29.425 }, 00:22:29.425 { 00:22:29.425 "subsystem": "nbd", 00:22:29.425 "config": [] 00:22:29.425 } 00:22:29.425 ] 00:22:29.425 }' 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1006045 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1006045 ']' 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1006045 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1006045 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1006045' 00:22:29.425 killing process with pid 1006045 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1006045 00:22:29.425 Received shutdown signal, test time was about 1.000000 seconds 00:22:29.425 00:22:29.425 Latency(us) 00:22:29.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.425 =================================================================================================================== 00:22:29.425 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:22:29.425 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1006045 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1005958 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1005958 ']' 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1005958 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1005958 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1005958' 00:22:29.683 killing process with pid 1005958 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1005958 00:22:29.683 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1005958 00:22:29.942 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:29.942 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.942 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:29.942 "subsystems": [ 00:22:29.942 { 00:22:29.942 "subsystem": "keyring", 00:22:29.942 "config": [ 00:22:29.942 { 00:22:29.942 "method": "keyring_file_add_key", 00:22:29.942 "params": { 00:22:29.942 "name": "key0", 00:22:29.942 "path": 
"/tmp/tmp.5AOGPiS0a3" 00:22:29.942 } 00:22:29.942 } 00:22:29.942 ] 00:22:29.942 }, 00:22:29.942 { 00:22:29.942 "subsystem": "iobuf", 00:22:29.942 "config": [ 00:22:29.942 { 00:22:29.942 "method": "iobuf_set_options", 00:22:29.942 "params": { 00:22:29.942 "small_pool_count": 8192, 00:22:29.942 "large_pool_count": 1024, 00:22:29.942 "small_bufsize": 8192, 00:22:29.942 "large_bufsize": 135168 00:22:29.942 } 00:22:29.942 } 00:22:29.942 ] 00:22:29.942 }, 00:22:29.942 { 00:22:29.942 "subsystem": "sock", 00:22:29.942 "config": [ 00:22:29.942 { 00:22:29.942 "method": "sock_set_default_impl", 00:22:29.942 "params": { 00:22:29.942 "impl_name": "posix" 00:22:29.942 } 00:22:29.942 }, 00:22:29.942 { 00:22:29.942 "method": "sock_impl_set_options", 00:22:29.942 "params": { 00:22:29.942 "impl_name": "ssl", 00:22:29.942 "recv_buf_size": 4096, 00:22:29.942 "send_buf_size": 4096, 00:22:29.942 "enable_recv_pipe": true, 00:22:29.942 "enable_quickack": false, 00:22:29.942 "enable_placement_id": 0, 00:22:29.942 "enable_zerocopy_send_server": true, 00:22:29.942 "enable_zerocopy_send_client": false, 00:22:29.942 "zerocopy_threshold": 0, 00:22:29.942 "tls_version": 0, 00:22:29.942 "enable_ktls": false 00:22:29.942 } 00:22:29.942 }, 00:22:29.942 { 00:22:29.942 "method": "sock_impl_set_options", 00:22:29.942 "params": { 00:22:29.942 "impl_name": "posix", 00:22:29.942 "recv_buf_size": 2097152, 00:22:29.942 "send_buf_size": 2097152, 00:22:29.942 "enable_recv_pipe": true, 00:22:29.942 "enable_quickack": false, 00:22:29.942 "enable_placement_id": 0, 00:22:29.942 "enable_zerocopy_send_server": true, 00:22:29.942 "enable_zerocopy_send_client": false, 00:22:29.942 "zerocopy_threshold": 0, 00:22:29.942 "tls_version": 0, 00:22:29.942 "enable_ktls": false 00:22:29.942 } 00:22:29.942 } 00:22:29.942 ] 00:22:29.942 }, 00:22:29.942 { 00:22:29.942 "subsystem": "vmd", 00:22:29.942 "config": [] 00:22:29.942 }, 00:22:29.942 { 00:22:29.942 "subsystem": "accel", 00:22:29.942 "config": [ 00:22:29.942 { 
00:22:29.942 "method": "accel_set_options", 00:22:29.942 "params": { 00:22:29.942 "small_cache_size": 128, 00:22:29.942 "large_cache_size": 16, 00:22:29.942 "task_count": 2048, 00:22:29.942 "sequence_count": 2048, 00:22:29.942 "buf_count": 2048 00:22:29.942 } 00:22:29.942 } 00:22:29.942 ] 00:22:29.942 }, 00:22:29.942 { 00:22:29.942 "subsystem": "bdev", 00:22:29.942 "config": [ 00:22:29.942 { 00:22:29.942 "method": "bdev_set_options", 00:22:29.942 "params": { 00:22:29.942 "bdev_io_pool_size": 65535, 00:22:29.942 "bdev_io_cache_size": 256, 00:22:29.942 "bdev_auto_examine": true, 00:22:29.942 "iobuf_small_cache_size": 128, 00:22:29.942 "iobuf_large_cache_size": 16 00:22:29.942 } 00:22:29.942 }, 00:22:29.942 { 00:22:29.942 "method": "bdev_raid_set_options", 00:22:29.942 "params": { 00:22:29.942 "process_window_size_kb": 1024, 00:22:29.943 "process_max_bandwidth_mb_sec": 0 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "bdev_iscsi_set_options", 00:22:29.943 "params": { 00:22:29.943 "timeout_sec": 30 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "bdev_nvme_set_options", 00:22:29.943 "params": { 00:22:29.943 "action_on_timeout": "none", 00:22:29.943 "timeout_us": 0, 00:22:29.943 "timeout_admin_us": 0, 00:22:29.943 "keep_alive_timeout_ms": 10000, 00:22:29.943 "arbitration_burst": 0, 00:22:29.943 "low_priority_weight": 0, 00:22:29.943 "medium_priority_weight": 0, 00:22:29.943 "high_priority_weight": 0, 00:22:29.943 "nvme_adminq_poll_period_us": 10000, 00:22:29.943 "nvme_ioq_poll_period_us": 0, 00:22:29.943 "io_queue_requests": 0, 00:22:29.943 "delay_cmd_submit": true, 00:22:29.943 "transport_retry_count": 4, 00:22:29.943 "bdev_retry_count": 3, 00:22:29.943 "transport_ack_timeout": 0, 00:22:29.943 "ctrlr_loss_timeout_sec": 0, 00:22:29.943 "reconnect_delay_sec": 0, 00:22:29.943 "fast_io_fail_timeout_sec": 0, 00:22:29.943 "disable_auto_failback": false, 00:22:29.943 "generate_uuids": false, 00:22:29.943 "transport_tos": 0, 
00:22:29.943 "nvme_error_stat": false, 00:22:29.943 "rdma_srq_size": 0, 00:22:29.943 "io_path_stat": false, 00:22:29.943 "allow_accel_sequence": false, 00:22:29.943 "rdma_max_cq_size": 0, 00:22:29.943 "rdma_cm_event_timeout_ms": 0, 00:22:29.943 "dhchap_digests": [ 00:22:29.943 "sha256", 00:22:29.943 "sha384", 00:22:29.943 "sha512" 00:22:29.943 ], 00:22:29.943 "dhchap_dhgroups": [ 00:22:29.943 "null", 00:22:29.943 "ffdhe2048", 00:22:29.943 "ffdhe3072", 00:22:29.943 "ffdhe4096", 00:22:29.943 "ffdhe6144", 00:22:29.943 "ffdhe8192" 00:22:29.943 ] 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "bdev_nvme_set_hotplug", 00:22:29.943 "params": { 00:22:29.943 "period_us": 100000, 00:22:29.943 "enable": false 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "bdev_malloc_create", 00:22:29.943 "params": { 00:22:29.943 "name": "malloc0", 00:22:29.943 "num_blocks": 8192, 00:22:29.943 "block_size": 4096, 00:22:29.943 "physical_block_size": 4096, 00:22:29.943 "uuid": "e3c2259a-9d0f-4931-be6c-912a3e479ebd", 00:22:29.943 "optimal_io_boundary": 0, 00:22:29.943 "md_size": 0, 00:22:29.943 "dif_type": 0, 00:22:29.943 "dif_is_head_of_md": false, 00:22:29.943 "dif_pi_format": 0 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "bdev_wait_for_examine" 00:22:29.943 } 00:22:29.943 ] 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "subsystem": "nbd", 00:22:29.943 "config": [] 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "subsystem": "scheduler", 00:22:29.943 "config": [ 00:22:29.943 { 00:22:29.943 "method": "framework_set_scheduler", 00:22:29.943 "params": { 00:22:29.943 "name": "static" 00:22:29.943 } 00:22:29.943 } 00:22:29.943 ] 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "subsystem": "nvmf", 00:22:29.943 "config": [ 00:22:29.943 { 00:22:29.943 "method": "nvmf_set_config", 00:22:29.943 "params": { 00:22:29.943 "discovery_filter": "match_any", 00:22:29.943 "admin_cmd_passthru": { 00:22:29.943 "identify_ctrlr": false 00:22:29.943 } 
00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "nvmf_set_max_subsystems", 00:22:29.943 "params": { 00:22:29.943 "max_subsystems": 1024 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "nvmf_set_crdt", 00:22:29.943 "params": { 00:22:29.943 "crdt1": 0, 00:22:29.943 "crdt2": 0, 00:22:29.943 "crdt3": 0 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "nvmf_create_transport", 00:22:29.943 "params": { 00:22:29.943 "trtype": "TCP", 00:22:29.943 "max_queue_depth": 128, 00:22:29.943 "max_io_qpairs_per_ctrlr": 127, 00:22:29.943 "in_capsule_data_size": 4096, 00:22:29.943 "max_io_size": 131072, 00:22:29.943 "io_unit_size": 131072, 00:22:29.943 "max_aq_depth": 128, 00:22:29.943 "num_shared_buffers": 511, 00:22:29.943 "buf_cache_size": 4294967295, 00:22:29.943 "dif_insert_or_strip": false, 00:22:29.943 "zcopy": false, 00:22:29.943 "c2h_success": false, 00:22:29.943 "sock_priority": 0, 00:22:29.943 "abort_timeout_sec": 1, 00:22:29.943 "ack_timeout": 0, 00:22:29.943 "data_wr_pool_size": 0 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "nvmf_create_subsystem", 00:22:29.943 "params": { 00:22:29.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.943 "allow_any_host": false, 00:22:29.943 "serial_number": "00000000000000000000", 00:22:29.943 "model_number": "SPDK bdev Controller", 00:22:29.943 "max_namespaces": 32, 00:22:29.943 "min_cntlid": 1, 00:22:29.943 "max_cntlid": 65519, 00:22:29.943 "ana_reporting": false 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "nvmf_subsystem_add_host", 00:22:29.943 "params": { 00:22:29.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.943 "host": "nqn.2016-06.io.spdk:host1", 00:22:29.943 "psk": "key0" 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "nvmf_subsystem_add_ns", 00:22:29.943 "params": { 00:22:29.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.943 "namespace": { 00:22:29.943 "nsid": 1, 00:22:29.943 "bdev_name": 
"malloc0", 00:22:29.943 "nguid": "E3C2259A9D0F4931BE6C912A3E479EBD", 00:22:29.943 "uuid": "e3c2259a-9d0f-4931-be6c-912a3e479ebd", 00:22:29.943 "no_auto_visible": false 00:22:29.943 } 00:22:29.943 } 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "method": "nvmf_subsystem_add_listener", 00:22:29.943 "params": { 00:22:29.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.943 "listen_address": { 00:22:29.943 "trtype": "TCP", 00:22:29.943 "adrfam": "IPv4", 00:22:29.943 "traddr": "10.0.0.2", 00:22:29.943 "trsvcid": "4420" 00:22:29.943 }, 00:22:29.943 "secure_channel": false, 00:22:29.943 "sock_impl": "ssl" 00:22:29.943 } 00:22:29.943 } 00:22:29.943 ] 00:22:29.943 } 00:22:29.943 ] 00:22:29.943 }' 00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1006393 00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1006393 00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1006393 ']' 00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.943 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.202 [2024-07-26 08:55:48.417662] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:30.202 [2024-07-26 08:55:48.417741] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.202 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.202 [2024-07-26 08:55:48.452891] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:30.202 [2024-07-26 08:55:48.484477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.202 [2024-07-26 08:55:48.573469] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.202 [2024-07-26 08:55:48.573527] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.202 [2024-07-26 08:55:48.573543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.202 [2024-07-26 08:55:48.573557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.202 [2024-07-26 08:55:48.573569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:30.202 [2024-07-26 08:55:48.573648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.460 [2024-07-26 08:55:48.816510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.460 [2024-07-26 08:55:48.860938] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.460 [2024-07-26 08:55:48.861228] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1006542 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1006542 /var/tmp/bdevperf.sock 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1006542 ']' 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.025 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:31.025 "subsystems": [ 00:22:31.025 { 00:22:31.025 "subsystem": "keyring", 00:22:31.025 "config": [ 00:22:31.025 { 00:22:31.025 "method": "keyring_file_add_key", 00:22:31.025 "params": { 00:22:31.025 "name": "key0", 00:22:31.025 "path": "/tmp/tmp.5AOGPiS0a3" 00:22:31.025 } 00:22:31.025 } 00:22:31.025 ] 00:22:31.025 }, 00:22:31.025 { 00:22:31.025 "subsystem": "iobuf", 00:22:31.025 "config": [ 00:22:31.025 { 00:22:31.025 "method": "iobuf_set_options", 00:22:31.025 "params": { 00:22:31.025 "small_pool_count": 8192, 00:22:31.025 "large_pool_count": 1024, 00:22:31.025 "small_bufsize": 8192, 00:22:31.025 "large_bufsize": 135168 00:22:31.025 } 00:22:31.025 } 00:22:31.025 ] 00:22:31.025 }, 00:22:31.025 { 00:22:31.025 "subsystem": "sock", 00:22:31.025 "config": [ 00:22:31.025 { 00:22:31.025 "method": "sock_set_default_impl", 00:22:31.025 "params": { 00:22:31.025 "impl_name": "posix" 00:22:31.025 } 00:22:31.025 }, 00:22:31.025 { 00:22:31.025 "method": "sock_impl_set_options", 00:22:31.025 "params": { 00:22:31.025 "impl_name": "ssl", 00:22:31.025 "recv_buf_size": 4096, 00:22:31.025 "send_buf_size": 4096, 00:22:31.025 "enable_recv_pipe": true, 00:22:31.025 "enable_quickack": false, 00:22:31.025 "enable_placement_id": 0, 00:22:31.025 "enable_zerocopy_send_server": true, 00:22:31.025 "enable_zerocopy_send_client": false, 00:22:31.025 "zerocopy_threshold": 0, 00:22:31.025 "tls_version": 0, 00:22:31.025 "enable_ktls": false 00:22:31.025 } 00:22:31.025 }, 00:22:31.025 { 00:22:31.025 "method": "sock_impl_set_options", 00:22:31.025 "params": { 00:22:31.025 "impl_name": "posix", 
00:22:31.025 "recv_buf_size": 2097152, 00:22:31.025 "send_buf_size": 2097152, 00:22:31.025 "enable_recv_pipe": true, 00:22:31.025 "enable_quickack": false, 00:22:31.025 "enable_placement_id": 0, 00:22:31.025 "enable_zerocopy_send_server": true, 00:22:31.025 "enable_zerocopy_send_client": false, 00:22:31.025 "zerocopy_threshold": 0, 00:22:31.025 "tls_version": 0, 00:22:31.025 "enable_ktls": false 00:22:31.025 } 00:22:31.025 } 00:22:31.025 ] 00:22:31.025 }, 00:22:31.025 { 00:22:31.025 "subsystem": "vmd", 00:22:31.025 "config": [] 00:22:31.025 }, 00:22:31.025 { 00:22:31.025 "subsystem": "accel", 00:22:31.025 "config": [ 00:22:31.025 { 00:22:31.025 "method": "accel_set_options", 00:22:31.025 "params": { 00:22:31.026 "small_cache_size": 128, 00:22:31.026 "large_cache_size": 16, 00:22:31.026 "task_count": 2048, 00:22:31.026 "sequence_count": 2048, 00:22:31.026 "buf_count": 2048 00:22:31.026 } 00:22:31.026 } 00:22:31.026 ] 00:22:31.026 }, 00:22:31.026 { 00:22:31.026 "subsystem": "bdev", 00:22:31.026 "config": [ 00:22:31.026 { 00:22:31.026 "method": "bdev_set_options", 00:22:31.026 "params": { 00:22:31.026 "bdev_io_pool_size": 65535, 00:22:31.026 "bdev_io_cache_size": 256, 00:22:31.026 "bdev_auto_examine": true, 00:22:31.026 "iobuf_small_cache_size": 128, 00:22:31.026 "iobuf_large_cache_size": 16 00:22:31.026 } 00:22:31.026 }, 00:22:31.026 { 00:22:31.026 "method": "bdev_raid_set_options", 00:22:31.026 "params": { 00:22:31.026 "process_window_size_kb": 1024, 00:22:31.026 "process_max_bandwidth_mb_sec": 0 00:22:31.026 } 00:22:31.026 }, 00:22:31.026 { 00:22:31.026 "method": "bdev_iscsi_set_options", 00:22:31.026 "params": { 00:22:31.026 "timeout_sec": 30 00:22:31.026 } 00:22:31.026 }, 00:22:31.026 { 00:22:31.026 "method": "bdev_nvme_set_options", 00:22:31.026 "params": { 00:22:31.026 "action_on_timeout": "none", 00:22:31.026 "timeout_us": 0, 00:22:31.026 "timeout_admin_us": 0, 00:22:31.026 "keep_alive_timeout_ms": 10000, 00:22:31.026 "arbitration_burst": 0, 00:22:31.026 
"low_priority_weight": 0, 00:22:31.026 "medium_priority_weight": 0, 00:22:31.026 "high_priority_weight": 0, 00:22:31.026 "nvme_adminq_poll_period_us": 10000, 00:22:31.026 "nvme_ioq_poll_period_us": 0, 00:22:31.026 "io_queue_requests": 512, 00:22:31.026 "delay_cmd_submit": true, 00:22:31.026 "transport_retry_count": 4, 00:22:31.026 "bdev_retry_count": 3, 00:22:31.026 "transport_ack_timeout": 0, 00:22:31.026 "ctrlr_loss_timeout_sec": 0, 00:22:31.026 "reconnect_delay_sec": 0, 00:22:31.026 "fast_io_fail_timeout_sec": 0, 00:22:31.026 "disable_auto_failback": false, 00:22:31.026 "generate_uuids": false, 00:22:31.026 "transport_tos": 0, 00:22:31.026 "nvme_error_stat": false, 00:22:31.026 "rdma_srq_size": 0, 00:22:31.026 "io_path_stat": false, 00:22:31.026 "allow_accel_sequence": false, 00:22:31.026 "rdma_max_cq_size": 0, 00:22:31.026 "rdma_cm_event_timeout_ms": 0, 00:22:31.026 "dhchap_digests": [ 00:22:31.026 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.026 "sha256", 00:22:31.026 "sha384", 00:22:31.026 "sha512" 00:22:31.026 ], 00:22:31.026 "dhchap_dhgroups": [ 00:22:31.026 "null", 00:22:31.026 "ffdhe2048", 00:22:31.026 "ffdhe3072", 00:22:31.026 "ffdhe4096", 00:22:31.026 "ffdhe6144", 00:22:31.026 "ffdhe8192" 00:22:31.026 ] 00:22:31.026 } 00:22:31.026 }, 00:22:31.026 { 00:22:31.026 "method": "bdev_nvme_attach_controller", 00:22:31.026 "params": { 00:22:31.026 "name": "nvme0", 00:22:31.026 "trtype": "TCP", 00:22:31.026 "adrfam": "IPv4", 00:22:31.026 "traddr": "10.0.0.2", 00:22:31.026 "trsvcid": "4420", 00:22:31.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.026 "prchk_reftag": false, 00:22:31.026 "prchk_guard": false, 00:22:31.026 "ctrlr_loss_timeout_sec": 0, 00:22:31.026 "reconnect_delay_sec": 0, 00:22:31.026 "fast_io_fail_timeout_sec": 0, 00:22:31.026 "psk": "key0", 00:22:31.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.026 "hdgst": false, 00:22:31.026 "ddgst": false 00:22:31.026 } 00:22:31.026 }, 
00:22:31.026 { 00:22:31.026 "method": "bdev_nvme_set_hotplug", 00:22:31.026 "params": { 00:22:31.026 "period_us": 100000, 00:22:31.026 "enable": false 00:22:31.026 } 00:22:31.026 }, 00:22:31.026 { 00:22:31.026 "method": "bdev_enable_histogram", 00:22:31.026 "params": { 00:22:31.026 "name": "nvme0n1", 00:22:31.026 "enable": true 00:22:31.026 } 00:22:31.026 }, 00:22:31.026 { 00:22:31.026 "method": "bdev_wait_for_examine" 00:22:31.026 } 00:22:31.026 ] 00:22:31.026 }, 00:22:31.026 { 00:22:31.026 "subsystem": "nbd", 00:22:31.026 "config": [] 00:22:31.026 } 00:22:31.026 ] 00:22:31.026 }' 00:22:31.026 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.026 [2024-07-26 08:55:49.463163] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:31.026 [2024-07-26 08:55:49.463250] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006542 ] 00:22:31.285 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.285 [2024-07-26 08:55:49.496335] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:31.285 [2024-07-26 08:55:49.526247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.285 [2024-07-26 08:55:49.616945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.543 [2024-07-26 08:55:49.795982] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.543 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.543 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:31.543 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:31.543 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:31.800 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.800 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:32.058 Running I/O for 1 seconds... 
00:22:32.991 00:22:32.991 Latency(us) 00:22:32.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.991 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:32.991 Verification LBA range: start 0x0 length 0x2000 00:22:32.991 nvme0n1 : 1.03 3251.38 12.70 0.00 0.00 38738.85 6165.24 58254.22 00:22:32.991 =================================================================================================================== 00:22:32.991 Total : 3251.38 12.70 0.00 0.00 38738.85 6165.24 58254.22 00:22:32.991 0 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:32.991 nvmf_trace.0 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@823 -- # return 0 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1006542 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1006542 ']' 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1006542 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1006542 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1006542' 00:22:32.991 killing process with pid 1006542 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1006542 00:22:32.991 Received shutdown signal, test time was about 1.000000 seconds 00:22:32.991 00:22:32.991 Latency(us) 00:22:32.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.991 =================================================================================================================== 00:22:32.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.991 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1006542 00:22:33.249 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:33.249 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.249 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@117 -- # sync 00:22:33.249 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.249 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:33.249 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.249 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.249 rmmod nvme_tcp 00:22:33.249 rmmod nvme_fabrics 00:22:33.507 rmmod nvme_keyring 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1006393 ']' 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1006393 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1006393 ']' 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1006393 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1006393 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1006393' 
00:22:33.507 killing process with pid 1006393 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1006393 00:22:33.507 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1006393 00:22:33.765 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.765 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:33.765 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.765 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.765 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.765 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.765 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.765 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.661 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:35.661 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.jCUdoao4BL /tmp/tmp.iJIYgQnJ78 /tmp/tmp.5AOGPiS0a3 00:22:35.661 00:22:35.661 real 1m19.161s 00:22:35.661 user 2m7.357s 00:22:35.661 sys 0m26.857s 00:22:35.661 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:35.661 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.661 ************************************ 00:22:35.661 END TEST nvmf_tls 00:22:35.661 ************************************ 00:22:35.661 08:55:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:35.661 08:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:35.661 08:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:35.661 08:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:35.661 ************************************ 00:22:35.661 START TEST nvmf_fips 00:22:35.661 ************************************ 00:22:35.662 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:35.921 * Looking for test storage... 00:22:35.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.921 08:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
paths/export.sh@5 -- # export PATH 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@83 -- # local target=3.0.0 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:35.921 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:35.922 Error setting digest 00:22:35.922 00223ECDAF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:35.922 00223ECDAF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:35.922 08:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:35.922 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.823 08:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:37.823 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:37.823 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.823 08:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:37.823 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.823 
08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:37.823 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.823 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:38.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:22:38.082 00:22:38.082 --- 10.0.0.2 ping statistics --- 00:22:38.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.082 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:22:38.082 00:22:38.082 --- 10.0.0.1 ping statistics --- 00:22:38.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.082 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1008772 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1008772 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1008772 ']' 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.082 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:38.082 [2024-07-26 08:55:56.399269] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:22:38.082 [2024-07-26 08:55:56.399351] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.082 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.082 [2024-07-26 08:55:56.436968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:38.082 [2024-07-26 08:55:56.471809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.341 [2024-07-26 08:55:56.562115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.341 [2024-07-26 08:55:56.562178] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.341 [2024-07-26 08:55:56.562206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.341 [2024-07-26 08:55:56.562226] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.341 [2024-07-26 08:55:56.562236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.341 [2024-07-26 08:55:56.562269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:38.907 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:39.166 [2024-07-26 08:55:57.578825] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.166 [2024-07-26 08:55:57.594825] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.166 [2024-07-26 08:55:57.595092] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.424 [2024-07-26 08:55:57.626716] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:39.424 malloc0 00:22:39.424 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.424 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1008930 00:22:39.424 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:39.424 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1008930 /var/tmp/bdevperf.sock 00:22:39.424 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1008930 ']' 00:22:39.424 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.424 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.424 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:39.424 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.424 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:39.424 [2024-07-26 08:55:57.720026] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:39.424 [2024-07-26 08:55:57.720146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008930 ] 00:22:39.424 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.424 [2024-07-26 08:55:57.753206] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:39.424 [2024-07-26 08:55:57.781573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.424 [2024-07-26 08:55:57.865697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.684 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.684 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:39.684 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:39.976 [2024-07-26 08:55:58.248427] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.976 [2024-07-26 08:55:58.248556] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature 
spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:39.976 TLSTESTn1 00:22:39.976 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:40.234 Running I/O for 10 seconds... 00:22:50.197 00:22:50.197 Latency(us) 00:22:50.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.197 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:50.197 Verification LBA range: start 0x0 length 0x2000 00:22:50.197 TLSTESTn1 : 10.05 2393.49 9.35 0.00 0.00 53340.92 9903.22 73788.68 00:22:50.197 =================================================================================================================== 00:22:50.197 Total : 2393.49 9.35 0.00 0.00 53340.92 9903.22 73788.68 00:22:50.197 0 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # 
tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:50.197 nvmf_trace.0 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1008930 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1008930 ']' 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1008930 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1008930 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1008930' 00:22:50.197 killing process with pid 1008930 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1008930 00:22:50.197 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.197 00:22:50.197 Latency(us) 00:22:50.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.197 =================================================================================================================== 00:22:50.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.197 [2024-07-26 08:56:08.652301] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 
scheduled for removal in v24.09 hit 1 times 00:22:50.197 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1008930 00:22:50.456 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:50.456 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.456 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:50.456 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.456 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:50.456 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.456 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.456 rmmod nvme_tcp 00:22:50.456 rmmod nvme_fabrics 00:22:50.715 rmmod nvme_keyring 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1008772 ']' 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1008772 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1008772 ']' 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1008772 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 1008772 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1008772' 00:22:50.715 killing process with pid 1008772 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1008772 00:22:50.715 [2024-07-26 08:56:08.973172] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:50.715 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1008772 00:22:50.974 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.974 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.974 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.974 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.974 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.974 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.974 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.974 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.878 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:52.878 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:52.878 00:22:52.878 real 0m17.154s 00:22:52.878 user 0m21.349s 00:22:52.878 sys 0m6.411s 00:22:52.878 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:52.878 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.878 ************************************ 00:22:52.878 END TEST nvmf_fips 00:22:52.878 ************************************ 00:22:52.878 08:56:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:22:52.878 08:56:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:52.878 08:56:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:52.878 08:56:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:52.878 08:56:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:52.878 ************************************ 00:22:52.878 START TEST nvmf_fuzz 00:22:52.878 ************************************ 00:22:52.878 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:52.878 * Looking for test storage... 
00:22:53.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.137 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.137 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:53.137 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.137 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.137 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.137 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.137 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.137 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.138 
08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.138 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.059 08:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.059 
08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.059 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:55.060 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:55.060 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.060 08:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:55.060 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.060 
08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:55.060 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:55.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:22:55.060 00:22:55.060 --- 10.0.0.2 ping statistics --- 00:22:55.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.060 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:22:55.060 00:22:55.060 --- 10.0.0.1 ping statistics --- 00:22:55.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.060 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1012173 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1012173 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1012173 ']' 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
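The `nvmf_tcp_init` steps traced above carve the two E810 ports into a target/initiator pair on a single host: one port (`cvl_0_0`) is moved into a private network namespace and addressed as 10.0.0.2, the other (`cvl_0_1`) stays in the root namespace as 10.0.0.1, and TCP port 4420 is opened for NVMe/TCP before both directions are ping-verified. A dry-run sketch of that sequence (interface names and addresses are taken from this log; the `run` wrapper only echoes each command, since the real ones need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence seen in this log.
# `run` echoes instead of executing: the real commands require root.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # NIC port handed to the SPDK target
INITIATOR_IF=cvl_0_1     # NIC port left for the initiator
NS=cvl_0_0_ns_spdk       # namespace that isolates the target port

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # reachability check, root ns -> target ns
```

Once both pings succeed, the log sets `is_hw=yes` and every target-side command (including the `nvmf_tgt` launch) is prefixed with `ip netns exec cvl_0_0_ns_spdk` via `NVMF_TARGET_NS_CMD`.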
00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:55.060 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:55.317 Malloc0 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.317 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:55.318 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:27.382 Fuzzing completed. 
Shutting down the fuzz application 00:23:27.382 00:23:27.382 Dumping successful admin opcodes: 00:23:27.382 8, 9, 10, 24, 00:23:27.382 Dumping successful io opcodes: 00:23:27.382 0, 9, 00:23:27.382 NS: 0x200003aeff00 I/O qp, Total commands completed: 469860, total successful commands: 2711, random_seed: 3312030528 00:23:27.382 NS: 0x200003aeff00 admin qp, Total commands completed: 57696, total successful commands: 462, random_seed: 976220736 00:23:27.382 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:27.382 Fuzzing completed. Shutting down the fuzz application 00:23:27.382 00:23:27.382 Dumping successful admin opcodes: 00:23:27.382 24, 00:23:27.382 Dumping successful io opcodes: 00:23:27.382 00:23:27.382 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2977709577 00:23:27.382 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2977836517 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:27.382 08:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.382 rmmod nvme_tcp 00:23:27.382 rmmod nvme_fabrics 00:23:27.382 rmmod nvme_keyring 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1012173 ']' 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1012173 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1012173 ']' 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1012173 00:23:27.382 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:23:27.383 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.383 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1012173 00:23:27.383 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:27.383 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:23:27.383 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1012173' 00:23:27.383 killing process with pid 1012173 00:23:27.383 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1012173 00:23:27.383 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1012173 00:23:27.642 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:27.642 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:27.642 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:27.642 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.642 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.642 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.642 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.642 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.546 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:29.546 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:29.804 00:23:29.804 real 0m36.723s 00:23:29.804 user 0m51.372s 00:23:29.804 sys 0m14.813s 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # 
set +x 00:23:29.804 ************************************ 00:23:29.804 END TEST nvmf_fuzz 00:23:29.804 ************************************ 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:29.804 ************************************ 00:23:29.804 START TEST nvmf_multiconnection 00:23:29.804 ************************************ 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:29.804 * Looking for test storage... 
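The teardown traced a few lines earlier (`killprocess` in autotest_common.sh) follows a standard pattern: probe the pid with `kill -0` (which sends no signal, only checks existence), look up the owning command with `ps --no-headers -o comm=`, then kill and reap it. A simplified, runnable sketch of that pattern (a plain SIGTERM stand-in for the SPDK helper, which additionally special-cases sudo-owned processes):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess pattern from this log's
# autotest_common.sh teardown; not the exact SPDK helper.
killprocess() {
    local pid=$1
    # kill -0 delivers no signal; it only tests that the pid exists
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid not running"
        return 1
    fi
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null
}

# demo: start a throwaway background process and tear it down
sleep 30 &
bg=$!
killprocess "$bg"
kill -0 "$bg" 2>/dev/null && echo "still alive" || echo "process $bg gone"
```

The `wait` after `kill` is what lets the trace above log "killing process with pid 1012173" and then return cleanly before the namespace and addresses are flushed.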
00:23:29.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.804 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable
00:23:29.805 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=()
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=()
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=()
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=()
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=()
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=()
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=()
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:23:31.733 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:23:31.733 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:31.733 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:23:31.734 Found net devices under 0000:0a:00.0: cvl_0_0
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:23:31.734 Found net devices under 0000:0a:00.1: cvl_0_1
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:31.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:31.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms
00:23:31.734
00:23:31.734 --- 10.0.0.2 ping statistics ---
00:23:31.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:31.734 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
00:23:31.734 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:31.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:31.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms
00:23:31.992
00:23:31.992 --- 10.0.0.1 ping statistics ---
00:23:31.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:31.992 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1017801
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1017801
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1017801 ']'
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:31.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:31.992 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:31.992 [2024-07-26 08:56:50.264713] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:23:31.992 [2024-07-26 08:56:50.264793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:31.992 EAL: No free 2048 kB hugepages reported on node 1
00:23:31.992 [2024-07-26 08:56:50.301742] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:23:31.992 [2024-07-26 08:56:50.329831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:31.992 [2024-07-26 08:56:50.422165] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:31.992 [2024-07-26 08:56:50.422228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:31.992 [2024-07-26 08:56:50.422257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:31.992 [2024-07-26 08:56:50.422268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:31.992 [2024-07-26 08:56:50.422278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:31.992 [2024-07-26 08:56:50.422337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:31.992 [2024-07-26 08:56:50.424080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:31.992 [2024-07-26 08:56:50.424157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:23:31.992 [2024-07-26 08:56:50.424161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.250 [2024-07-26 08:56:50.583550] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11
00:23:32.250 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.251 Malloc1
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.251 [2024-07-26 08:56:50.640867] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.251 Malloc2
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.251 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 Malloc3
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 Malloc4
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 Malloc5
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 Malloc6
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 Malloc7
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.510 Malloc8
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.510 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.769 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.769 Malloc9
00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.769 Malloc10 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.769 Malloc11 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:32.769 
08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.769 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.770 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.770 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:32.770 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.770 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.770 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.770 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:32.770 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.770 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
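The xtrace above is the per-subsystem setup loop from target/multiconnection.sh (lines 21-25): for each of the 11 subsystems it creates a malloc bdev, a subsystem, a namespace, and a TCP listener. A minimal runnable sketch of that loop, with `rpc_cmd` replaced by a stand-in that echoes the would-be `rpc.py` invocation so it runs without a live SPDK target (the stand-in and the echoed `rpc.py` prefix are assumptions; the RPC names and the 10.0.0.2:4420 listener match the log):

```shell
#!/usr/bin/env bash
NVMF_SUBSYS=11

# Stand-in for SPDK's rpc_cmd wrapper: print the RPC instead of issuing it.
rpc_cmd() { echo "rpc.py $*"; }

for i in $(seq 1 "$NVMF_SUBSYS"); do
  # 64 MiB malloc bdev with 512-byte blocks, as in the log
  rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
  # allow any host (-a), serial number SPDKn (-s)
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
    -t tcp -a 10.0.0.2 -s 4420
done
```

Against a real target the stand-in would be the actual `rpc.py` client; everything else mirrors the commands traced in the log.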
00:23:33.334 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:33.334 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:33.334 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:33.334 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:33.334 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:35.859 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:35.859 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:35.859 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:35.859 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:35.859 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:35.859 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:35.859 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:35.859 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:36.116 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:36.116 08:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:36.116 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:36.116 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:36.116 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:38.013 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:38.013 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:38.013 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:38.013 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:38.013 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:38.013 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:38.013 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:38.013 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:38.946 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:38.946 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:38.946 08:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:38.946 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:38.946 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:40.843 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:40.843 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:40.843 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:23:40.843 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:40.843 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:40.843 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:40.843 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:40.843 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:41.408 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:41.409 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:41.409 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:41.409 
08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:41.409 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:43.305 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:43.305 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:43.305 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:23:43.305 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:43.306 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:43.306 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:43.306 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:43.306 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:44.236 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:44.236 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:44.236 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:44.236 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:44.236 08:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:46.131 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:46.131 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:46.131 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:23:46.131 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:46.131 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:46.131 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:46.131 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.131 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:46.696 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:46.696 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:46.696 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:46.696 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:46.696 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:49.225 08:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:49.225 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:49.225 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:23:49.225 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:49.225 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:49.225 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:49.225 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:49.225 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:49.829 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:49.829 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:49.829 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:49.829 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:49.829 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:51.727 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:51.727 08:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:51.727 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:23:51.727 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:51.727 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:51.727 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:51.727 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.727 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:23:52.664 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:52.664 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:52.664 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:52.664 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:52.664 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:54.565 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:54.565 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:54.565 08:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:23:54.565 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:54.565 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:54.565 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:54.565 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.565 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:55.497 08:57:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:55.497 08:57:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:55.497 08:57:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.497 08:57:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:55.497 08:57:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:57.390 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:57.390 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:57.390 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:23:57.390 08:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:57.390 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:57.390 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:57.390 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.390 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:58.322 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:58.322 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:58.322 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:58.322 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:58.322 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:00.847 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:00.847 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:00.847 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:00.847 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:00.847 08:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:00.847 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:00.847 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.847 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:01.413 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:01.413 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:01.413 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:01.413 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:01.413 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:03.311 08:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:03.311 08:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:03.311 08:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:03.311 08:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:03.311 08:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:03.311 
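After each `nvme connect` the log calls `waitforserial` from common/autotest_common.sh, which polls `lsblk -l -o NAME,SERIAL` (with `sleep 2` between tries, up to 16 attempts) until a block device carrying the expected serial appears. A sketch of that polling logic; `list_serials` is a stand-in for the `lsblk` call so the sketch runs without NVMe hardware (the stand-in and its sample output are assumptions, the loop structure follows the trace):

```shell
#!/usr/bin/env bash

# Stand-in for "lsblk -l -o NAME,SERIAL": fake two attached namespaces.
list_serials() { printf 'nvme0n1 SPDK1\nnvme1n1 SPDK2\n'; }

# Poll until a device with the given serial shows up; up to 16 tries.
waitforserial() {
  local serial=$1 i=0 nvme_devices=0
  while (( i++ <= 15 )); do
    nvme_devices=$(list_serials | grep -c "$serial" || true)
    if (( nvme_devices >= 1 )); then
      return 0
    fi
    sleep 2
  done
  return 1
}

waitforserial SPDK1 && echo "SPDK1 visible"
```

In the real helper the success condition compares `nvme_devices` against an expected device count rather than just 1, which matters when one connect exposes multiple namespaces.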
08:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:03.311 08:57:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:03.311 [global] 00:24:03.311 thread=1 00:24:03.311 invalidate=1 00:24:03.311 rw=read 00:24:03.311 time_based=1 00:24:03.311 runtime=10 00:24:03.311 ioengine=libaio 00:24:03.311 direct=1 00:24:03.311 bs=262144 00:24:03.311 iodepth=64 00:24:03.311 norandommap=1 00:24:03.311 numjobs=1 00:24:03.311 00:24:03.311 [job0] 00:24:03.311 filename=/dev/nvme0n1 00:24:03.311 [job1] 00:24:03.311 filename=/dev/nvme10n1 00:24:03.311 [job2] 00:24:03.311 filename=/dev/nvme1n1 00:24:03.311 [job3] 00:24:03.311 filename=/dev/nvme2n1 00:24:03.311 [job4] 00:24:03.311 filename=/dev/nvme3n1 00:24:03.311 [job5] 00:24:03.311 filename=/dev/nvme4n1 00:24:03.311 [job6] 00:24:03.311 filename=/dev/nvme5n1 00:24:03.311 [job7] 00:24:03.311 filename=/dev/nvme6n1 00:24:03.311 [job8] 00:24:03.311 filename=/dev/nvme7n1 00:24:03.311 [job9] 00:24:03.311 filename=/dev/nvme8n1 00:24:03.311 [job10] 00:24:03.311 filename=/dev/nvme9n1 00:24:03.569 Could not set queue depth (nvme0n1) 00:24:03.569 Could not set queue depth (nvme10n1) 00:24:03.569 Could not set queue depth (nvme1n1) 00:24:03.569 Could not set queue depth (nvme2n1) 00:24:03.569 Could not set queue depth (nvme3n1) 00:24:03.569 Could not set queue depth (nvme4n1) 00:24:03.569 Could not set queue depth (nvme5n1) 00:24:03.569 Could not set queue depth (nvme6n1) 00:24:03.569 Could not set queue depth (nvme7n1) 00:24:03.569 Could not set queue depth (nvme8n1) 00:24:03.569 Could not set queue depth (nvme9n1) 00:24:03.569 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:03.569 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:24:03.569 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:03.569 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:03.569 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:03.569 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:03.569 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:03.569 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:03.569 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:03.569 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:03.569 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:03.569 fio-3.35 00:24:03.569 Starting 11 threads 00:24:15.808 00:24:15.808 job0: (groupid=0, jobs=1): err= 0: pid=1022694: Fri Jul 26 08:57:32 2024 00:24:15.808 read: IOPS=808, BW=202MiB/s (212MB/s)(2033MiB/10054msec) 00:24:15.808 slat (usec): min=9, max=146498, avg=1046.80, stdev=3649.89 00:24:15.808 clat (msec): min=2, max=284, avg=78.01, stdev=36.07 00:24:15.808 lat (msec): min=2, max=284, avg=79.06, stdev=36.51 00:24:15.808 clat percentiles (msec): 00:24:15.808 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 32], 20.00th=[ 54], 00:24:15.808 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 78], 60.00th=[ 83], 00:24:15.808 | 70.00th=[ 90], 80.00th=[ 102], 90.00th=[ 121], 95.00th=[ 138], 00:24:15.808 | 99.00th=[ 197], 99.50th=[ 211], 99.90th=[ 266], 99.95th=[ 266], 00:24:15.808 | 99.99th=[ 284] 00:24:15.808 bw ( KiB/s): min=94720, max=309760, per=11.05%, 
avg=206592.00, stdev=58486.19, samples=20 00:24:15.808 iops : min= 370, max= 1210, avg=807.00, stdev=228.46, samples=20 00:24:15.808 lat (msec) : 4=0.09%, 10=2.26%, 20=3.92%, 50=9.95%, 100=62.74% 00:24:15.808 lat (msec) : 250=20.73%, 500=0.31% 00:24:15.808 cpu : usr=0.52%, sys=2.66%, ctx=1689, majf=0, minf=3721 00:24:15.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:15.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.808 issued rwts: total=8133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.808 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.808 job1: (groupid=0, jobs=1): err= 0: pid=1022720: Fri Jul 26 08:57:32 2024 00:24:15.808 read: IOPS=861, BW=215MiB/s (226MB/s)(2156MiB/10014msec) 00:24:15.808 slat (usec): min=9, max=91109, avg=858.32, stdev=3134.51 00:24:15.808 clat (msec): min=2, max=260, avg=73.41, stdev=45.39 00:24:15.808 lat (msec): min=2, max=260, avg=74.27, stdev=45.74 00:24:15.808 clat percentiles (msec): 00:24:15.808 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 34], 00:24:15.808 | 30.00th=[ 37], 40.00th=[ 46], 50.00th=[ 63], 60.00th=[ 80], 00:24:15.808 | 70.00th=[ 94], 80.00th=[ 114], 90.00th=[ 142], 95.00th=[ 157], 00:24:15.808 | 99.00th=[ 203], 99.50th=[ 234], 99.90th=[ 257], 99.95th=[ 259], 00:24:15.808 | 99.99th=[ 262] 00:24:15.808 bw ( KiB/s): min=98816, max=468480, per=11.73%, avg=219161.60, stdev=121419.93, samples=20 00:24:15.808 iops : min= 386, max= 1830, avg=856.10, stdev=474.30, samples=20 00:24:15.808 lat (msec) : 4=0.37%, 10=1.29%, 20=1.98%, 50=39.75%, 100=30.52% 00:24:15.808 lat (msec) : 250=25.83%, 500=0.26% 00:24:15.808 cpu : usr=0.46%, sys=2.77%, ctx=1954, majf=0, minf=4097 00:24:15.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:15.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:15.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.809 issued rwts: total=8624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.809 job2: (groupid=0, jobs=1): err= 0: pid=1022762: Fri Jul 26 08:57:32 2024 00:24:15.809 read: IOPS=843, BW=211MiB/s (221MB/s)(2120MiB/10054msec) 00:24:15.809 slat (usec): min=13, max=110544, avg=1088.89, stdev=3458.20 00:24:15.809 clat (msec): min=2, max=273, avg=74.73, stdev=36.74 00:24:15.809 lat (msec): min=2, max=273, avg=75.82, stdev=37.27 00:24:15.809 clat percentiles (msec): 00:24:15.809 | 1.00th=[ 10], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 39], 00:24:15.809 | 30.00th=[ 52], 40.00th=[ 67], 50.00th=[ 75], 60.00th=[ 82], 00:24:15.809 | 70.00th=[ 90], 80.00th=[ 101], 90.00th=[ 120], 95.00th=[ 133], 00:24:15.809 | 99.00th=[ 194], 99.50th=[ 215], 99.90th=[ 257], 99.95th=[ 259], 00:24:15.809 | 99.99th=[ 275] 00:24:15.809 bw ( KiB/s): min=94208, max=458240, per=11.53%, avg=215510.70, stdev=92815.16, samples=20 00:24:15.809 iops : min= 368, max= 1790, avg=841.80, stdev=362.61, samples=20 00:24:15.809 lat (msec) : 4=0.31%, 10=0.71%, 20=0.99%, 50=27.72%, 100=50.25% 00:24:15.809 lat (msec) : 250=19.89%, 500=0.13% 00:24:15.809 cpu : usr=0.62%, sys=2.74%, ctx=1774, majf=0, minf=4097 00:24:15.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:15.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.809 issued rwts: total=8481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.809 job3: (groupid=0, jobs=1): err= 0: pid=1022784: Fri Jul 26 08:57:32 2024 00:24:15.809 read: IOPS=581, BW=145MiB/s (153MB/s)(1465MiB/10068msec) 00:24:15.809 slat (usec): min=14, max=78904, avg=1568.96, stdev=4987.85 00:24:15.809 clat (msec): 
min=5, max=277, avg=108.34, stdev=44.77 00:24:15.809 lat (msec): min=5, max=277, avg=109.91, stdev=45.62 00:24:15.809 clat percentiles (msec): 00:24:15.809 | 1.00th=[ 12], 5.00th=[ 26], 10.00th=[ 50], 20.00th=[ 74], 00:24:15.809 | 30.00th=[ 87], 40.00th=[ 96], 50.00th=[ 107], 60.00th=[ 120], 00:24:15.809 | 70.00th=[ 131], 80.00th=[ 150], 90.00th=[ 167], 95.00th=[ 186], 00:24:15.809 | 99.00th=[ 203], 99.50th=[ 209], 99.90th=[ 243], 99.95th=[ 262], 00:24:15.809 | 99.99th=[ 279] 00:24:15.809 bw ( KiB/s): min=88576, max=285125, per=7.94%, avg=148305.35, stdev=44611.03, samples=20 00:24:15.809 iops : min= 346, max= 1113, avg=579.25, stdev=174.12, samples=20 00:24:15.809 lat (msec) : 10=0.58%, 20=2.97%, 50=6.59%, 100=35.01%, 250=54.78% 00:24:15.809 lat (msec) : 500=0.07% 00:24:15.809 cpu : usr=0.28%, sys=2.16%, ctx=1261, majf=0, minf=4097 00:24:15.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:15.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.809 issued rwts: total=5858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.809 job4: (groupid=0, jobs=1): err= 0: pid=1022794: Fri Jul 26 08:57:32 2024 00:24:15.809 read: IOPS=666, BW=167MiB/s (175MB/s)(1684MiB/10108msec) 00:24:15.809 slat (usec): min=14, max=84806, avg=1407.90, stdev=4343.59 00:24:15.809 clat (msec): min=6, max=249, avg=94.54, stdev=44.50 00:24:15.809 lat (msec): min=6, max=249, avg=95.95, stdev=45.18 00:24:15.809 clat percentiles (msec): 00:24:15.809 | 1.00th=[ 28], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 51], 00:24:15.809 | 30.00th=[ 62], 40.00th=[ 74], 50.00th=[ 91], 60.00th=[ 107], 00:24:15.809 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 159], 95.00th=[ 171], 00:24:15.809 | 99.00th=[ 190], 99.50th=[ 199], 99.90th=[ 228], 99.95th=[ 247], 00:24:15.809 | 99.99th=[ 249] 00:24:15.809 bw ( 
KiB/s): min=95744, max=339456, per=9.14%, avg=170828.80, stdev=70856.12, samples=20 00:24:15.809 iops : min= 374, max= 1326, avg=667.30, stdev=276.78, samples=20 00:24:15.809 lat (msec) : 10=0.04%, 20=0.56%, 50=19.06%, 100=35.48%, 250=44.85% 00:24:15.809 cpu : usr=0.41%, sys=2.34%, ctx=1415, majf=0, minf=4097 00:24:15.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:15.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.809 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.809 job5: (groupid=0, jobs=1): err= 0: pid=1022798: Fri Jul 26 08:57:32 2024 00:24:15.809 read: IOPS=540, BW=135MiB/s (142MB/s)(1365MiB/10106msec) 00:24:15.809 slat (usec): min=14, max=54563, avg=1749.29, stdev=5066.16 00:24:15.809 clat (msec): min=5, max=279, avg=116.62, stdev=45.24 00:24:15.809 lat (msec): min=5, max=279, avg=118.37, stdev=45.95 00:24:15.809 clat percentiles (msec): 00:24:15.809 | 1.00th=[ 23], 5.00th=[ 54], 10.00th=[ 63], 20.00th=[ 77], 00:24:15.809 | 30.00th=[ 88], 40.00th=[ 100], 50.00th=[ 109], 60.00th=[ 128], 00:24:15.809 | 70.00th=[ 146], 80.00th=[ 159], 90.00th=[ 180], 95.00th=[ 190], 00:24:15.809 | 99.00th=[ 220], 99.50th=[ 239], 99.90th=[ 279], 99.95th=[ 279], 00:24:15.809 | 99.99th=[ 279] 00:24:15.809 bw ( KiB/s): min=87040, max=227840, per=7.39%, avg=138163.20, stdev=44304.48, samples=20 00:24:15.809 iops : min= 340, max= 890, avg=539.70, stdev=173.06, samples=20 00:24:15.809 lat (msec) : 10=0.22%, 20=0.60%, 50=3.11%, 100=37.97%, 250=57.75% 00:24:15.809 lat (msec) : 500=0.35% 00:24:15.809 cpu : usr=0.34%, sys=1.81%, ctx=1160, majf=0, minf=4097 00:24:15.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:15.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.809 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.809 issued rwts: total=5460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.809 job6: (groupid=0, jobs=1): err= 0: pid=1022805: Fri Jul 26 08:57:32 2024 00:24:15.809 read: IOPS=700, BW=175MiB/s (184MB/s)(1763MiB/10069msec) 00:24:15.809 slat (usec): min=9, max=52246, avg=1016.76, stdev=3321.59 00:24:15.809 clat (usec): min=1711, max=226983, avg=90283.20, stdev=34060.98 00:24:15.809 lat (usec): min=1730, max=227002, avg=91299.95, stdev=34265.65 00:24:15.809 clat percentiles (msec): 00:24:15.809 | 1.00th=[ 6], 5.00th=[ 32], 10.00th=[ 47], 20.00th=[ 66], 00:24:15.809 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 91], 60.00th=[ 99], 00:24:15.809 | 70.00th=[ 107], 80.00th=[ 118], 90.00th=[ 133], 95.00th=[ 144], 00:24:15.809 | 99.00th=[ 174], 99.50th=[ 190], 99.90th=[ 215], 99.95th=[ 220], 00:24:15.809 | 99.99th=[ 228] 00:24:15.809 bw ( KiB/s): min=128000, max=267776, per=9.57%, avg=178918.40, stdev=38536.64, samples=20 00:24:15.809 iops : min= 500, max= 1046, avg=698.90, stdev=150.53, samples=20 00:24:15.809 lat (msec) : 2=0.03%, 4=0.75%, 10=1.33%, 20=1.50%, 50=8.22% 00:24:15.809 lat (msec) : 100=50.28%, 250=37.88% 00:24:15.809 cpu : usr=0.37%, sys=2.24%, ctx=1657, majf=0, minf=4097 00:24:15.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:15.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.809 issued rwts: total=7052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.809 job7: (groupid=0, jobs=1): err= 0: pid=1022809: Fri Jul 26 08:57:32 2024 00:24:15.809 read: IOPS=523, BW=131MiB/s (137MB/s)(1319MiB/10072msec) 00:24:15.809 slat (usec): min=14, max=44406, avg=1506.16, stdev=4691.86 00:24:15.809 clat 
(msec): min=4, max=236, avg=120.61, stdev=42.18 00:24:15.809 lat (msec): min=4, max=246, avg=122.12, stdev=43.07 00:24:15.809 clat percentiles (msec): 00:24:15.809 | 1.00th=[ 17], 5.00th=[ 45], 10.00th=[ 69], 20.00th=[ 87], 00:24:15.809 | 30.00th=[ 100], 40.00th=[ 110], 50.00th=[ 122], 60.00th=[ 133], 00:24:15.809 | 70.00th=[ 148], 80.00th=[ 157], 90.00th=[ 176], 95.00th=[ 188], 00:24:15.809 | 99.00th=[ 205], 99.50th=[ 209], 99.90th=[ 224], 99.95th=[ 236], 00:24:15.809 | 99.99th=[ 236] 00:24:15.809 bw ( KiB/s): min=86016, max=201216, per=7.14%, avg=133413.90, stdev=31487.71, samples=20 00:24:15.809 iops : min= 336, max= 786, avg=521.10, stdev=123.02, samples=20 00:24:15.809 lat (msec) : 10=0.15%, 20=1.67%, 50=3.98%, 100=24.95%, 250=69.25% 00:24:15.809 cpu : usr=0.39%, sys=1.85%, ctx=1305, majf=0, minf=4097 00:24:15.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:15.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.809 issued rwts: total=5275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.809 job8: (groupid=0, jobs=1): err= 0: pid=1022810: Fri Jul 26 08:57:32 2024 00:24:15.809 read: IOPS=564, BW=141MiB/s (148MB/s)(1426MiB/10108msec) 00:24:15.809 slat (usec): min=13, max=48765, avg=1659.08, stdev=4523.33 00:24:15.809 clat (msec): min=2, max=245, avg=111.64, stdev=49.66 00:24:15.809 lat (msec): min=2, max=245, avg=113.30, stdev=50.53 00:24:15.809 clat percentiles (msec): 00:24:15.809 | 1.00th=[ 8], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 65], 00:24:15.809 | 30.00th=[ 79], 40.00th=[ 93], 50.00th=[ 112], 60.00th=[ 132], 00:24:15.809 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 176], 95.00th=[ 186], 00:24:15.809 | 99.00th=[ 203], 99.50th=[ 213], 99.90th=[ 239], 99.95th=[ 239], 00:24:15.809 | 99.99th=[ 247] 00:24:15.810 bw ( KiB/s): min=82944, 
max=247296, per=7.73%, avg=144450.50, stdev=56505.43, samples=20 00:24:15.810 iops : min= 324, max= 966, avg=564.25, stdev=220.72, samples=20 00:24:15.810 lat (msec) : 4=0.16%, 10=1.75%, 20=1.40%, 50=8.54%, 100=32.62% 00:24:15.810 lat (msec) : 250=55.53% 00:24:15.810 cpu : usr=0.43%, sys=1.88%, ctx=1287, majf=0, minf=4097 00:24:15.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:15.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.810 issued rwts: total=5705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.810 job9: (groupid=0, jobs=1): err= 0: pid=1022811: Fri Jul 26 08:57:32 2024 00:24:15.810 read: IOPS=666, BW=167MiB/s (175MB/s)(1675MiB/10055msec) 00:24:15.810 slat (usec): min=9, max=142574, avg=809.58, stdev=4022.15 00:24:15.810 clat (msec): min=2, max=246, avg=95.19, stdev=49.25 00:24:15.810 lat (msec): min=2, max=319, avg=96.00, stdev=49.81 00:24:15.810 clat percentiles (msec): 00:24:15.810 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 28], 20.00th=[ 44], 00:24:15.810 | 30.00th=[ 62], 40.00th=[ 83], 50.00th=[ 100], 60.00th=[ 112], 00:24:15.810 | 70.00th=[ 124], 80.00th=[ 142], 90.00th=[ 159], 95.00th=[ 180], 00:24:15.810 | 99.00th=[ 194], 99.50th=[ 197], 99.90th=[ 209], 99.95th=[ 209], 00:24:15.810 | 99.99th=[ 247] 00:24:15.810 bw ( KiB/s): min=108544, max=322048, per=9.09%, avg=169892.55, stdev=59275.57, samples=20 00:24:15.810 iops : min= 424, max= 1258, avg=663.60, stdev=231.59, samples=20 00:24:15.810 lat (msec) : 4=0.19%, 10=1.73%, 20=4.60%, 50=17.75%, 100=26.24% 00:24:15.810 lat (msec) : 250=49.48% 00:24:15.810 cpu : usr=0.28%, sys=1.85%, ctx=1734, majf=0, minf=4097 00:24:15.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:15.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:15.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.810 issued rwts: total=6699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.810 job10: (groupid=0, jobs=1): err= 0: pid=1022812: Fri Jul 26 08:57:32 2024 00:24:15.810 read: IOPS=570, BW=143MiB/s (150MB/s)(1441MiB/10101msec) 00:24:15.810 slat (usec): min=12, max=126605, avg=1557.93, stdev=5145.09 00:24:15.810 clat (msec): min=2, max=276, avg=110.50, stdev=53.31 00:24:15.810 lat (msec): min=2, max=276, avg=112.05, stdev=54.20 00:24:15.810 clat percentiles (msec): 00:24:15.810 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 44], 00:24:15.810 | 30.00th=[ 79], 40.00th=[ 101], 50.00th=[ 120], 60.00th=[ 138], 00:24:15.810 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 176], 95.00th=[ 186], 00:24:15.810 | 99.00th=[ 201], 99.50th=[ 207], 99.90th=[ 232], 99.95th=[ 236], 00:24:15.810 | 99.99th=[ 279] 00:24:15.810 bw ( KiB/s): min=84992, max=475648, per=7.81%, avg=145925.25, stdev=87168.05, samples=20 00:24:15.810 iops : min= 332, max= 1858, avg=570.00, stdev=340.51, samples=20 00:24:15.810 lat (msec) : 4=0.07%, 10=1.23%, 20=1.32%, 50=19.15%, 100=18.46% 00:24:15.810 lat (msec) : 250=59.74%, 500=0.03% 00:24:15.810 cpu : usr=0.43%, sys=1.70%, ctx=1236, majf=0, minf=4097 00:24:15.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:15.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:15.810 issued rwts: total=5765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:15.810 00:24:15.810 Run status group 0 (all jobs): 00:24:15.810 READ: bw=1825MiB/s (1914MB/s), 131MiB/s-215MiB/s (137MB/s-226MB/s), io=18.0GiB (19.3GB), run=10014-10108msec 00:24:15.810 00:24:15.810 Disk stats (read/write): 00:24:15.810 nvme0n1: 
ios=15971/0, merge=0/0, ticks=1235755/0, in_queue=1235755, util=96.94% 00:24:15.810 nvme10n1: ios=16794/0, merge=0/0, ticks=1240805/0, in_queue=1240805, util=97.19% 00:24:15.810 nvme1n1: ios=16710/0, merge=0/0, ticks=1234623/0, in_queue=1234623, util=97.50% 00:24:15.810 nvme2n1: ios=11442/0, merge=0/0, ticks=1230307/0, in_queue=1230307, util=97.65% 00:24:15.810 nvme3n1: ios=13267/0, merge=0/0, ticks=1226489/0, in_queue=1226489, util=97.75% 00:24:15.810 nvme4n1: ios=10732/0, merge=0/0, ticks=1227445/0, in_queue=1227445, util=98.12% 00:24:15.810 nvme5n1: ios=13851/0, merge=0/0, ticks=1239320/0, in_queue=1239320, util=98.30% 00:24:15.810 nvme6n1: ios=10283/0, merge=0/0, ticks=1231021/0, in_queue=1231021, util=98.43% 00:24:15.810 nvme7n1: ios=11206/0, merge=0/0, ticks=1225218/0, in_queue=1225218, util=98.87% 00:24:15.810 nvme8n1: ios=13153/0, merge=0/0, ticks=1239153/0, in_queue=1239153, util=99.08% 00:24:15.810 nvme9n1: ios=11316/0, merge=0/0, ticks=1229654/0, in_queue=1229654, util=99.19% 00:24:15.810 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:15.810 [global] 00:24:15.810 thread=1 00:24:15.810 invalidate=1 00:24:15.810 rw=randwrite 00:24:15.810 time_based=1 00:24:15.810 runtime=10 00:24:15.810 ioengine=libaio 00:24:15.810 direct=1 00:24:15.810 bs=262144 00:24:15.810 iodepth=64 00:24:15.810 norandommap=1 00:24:15.810 numjobs=1 00:24:15.810 00:24:15.810 [job0] 00:24:15.810 filename=/dev/nvme0n1 00:24:15.810 [job1] 00:24:15.810 filename=/dev/nvme10n1 00:24:15.810 [job2] 00:24:15.810 filename=/dev/nvme1n1 00:24:15.810 [job3] 00:24:15.810 filename=/dev/nvme2n1 00:24:15.810 [job4] 00:24:15.810 filename=/dev/nvme3n1 00:24:15.810 [job5] 00:24:15.810 filename=/dev/nvme4n1 00:24:15.810 [job6] 00:24:15.810 filename=/dev/nvme5n1 00:24:15.810 [job7] 00:24:15.810 filename=/dev/nvme6n1 00:24:15.810 [job8] 
00:24:15.810 filename=/dev/nvme7n1 00:24:15.810 [job9] 00:24:15.810 filename=/dev/nvme8n1 00:24:15.810 [job10] 00:24:15.810 filename=/dev/nvme9n1 00:24:15.810 Could not set queue depth (nvme0n1) 00:24:15.810 Could not set queue depth (nvme10n1) 00:24:15.810 Could not set queue depth (nvme1n1) 00:24:15.810 Could not set queue depth (nvme2n1) 00:24:15.810 Could not set queue depth (nvme3n1) 00:24:15.810 Could not set queue depth (nvme4n1) 00:24:15.810 Could not set queue depth (nvme5n1) 00:24:15.810 Could not set queue depth (nvme6n1) 00:24:15.810 Could not set queue depth (nvme7n1) 00:24:15.810 Could not set queue depth (nvme8n1) 00:24:15.810 Could not set queue depth (nvme9n1) 00:24:15.810 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 job10: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:15.810 fio-3.35 00:24:15.810 Starting 11 threads 00:24:25.786 00:24:25.786 job0: (groupid=0, jobs=1): err= 0: pid=1023842: Fri Jul 26 08:57:43 2024 00:24:25.786 write: IOPS=716, BW=179MiB/s (188MB/s)(1807MiB/10082msec); 0 zone resets 00:24:25.786 slat (usec): min=18, max=47621, avg=1042.98, stdev=2788.14 00:24:25.786 clat (usec): min=1207, max=309621, avg=88181.78, stdev=55705.67 00:24:25.786 lat (usec): min=1248, max=309663, avg=89224.77, stdev=56441.78 00:24:25.786 clat percentiles (msec): 00:24:25.786 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 28], 20.00th=[ 41], 00:24:25.786 | 30.00th=[ 50], 40.00th=[ 68], 50.00th=[ 79], 60.00th=[ 91], 00:24:25.786 | 70.00th=[ 109], 80.00th=[ 131], 90.00th=[ 167], 95.00th=[ 203], 00:24:25.786 | 99.00th=[ 247], 99.50th=[ 296], 99.90th=[ 309], 99.95th=[ 309], 00:24:25.786 | 99.99th=[ 309] 00:24:25.786 bw ( KiB/s): min=61440, max=403161, per=12.70%, avg=183383.65, stdev=84158.41, samples=20 00:24:25.786 iops : min= 240, max= 1574, avg=716.30, stdev=328.63, samples=20 00:24:25.786 lat (msec) : 2=0.06%, 4=0.12%, 10=1.09%, 20=4.12%, 50=24.74% 00:24:25.786 lat (msec) : 100=35.63%, 250=33.30%, 500=0.94% 00:24:25.786 cpu : usr=2.04%, sys=2.15%, ctx=3778, majf=0, minf=1 00:24:25.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:25.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.786 issued rwts: total=0,7228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.786 job1: (groupid=0, jobs=1): err= 0: pid=1023854: Fri Jul 26 08:57:43 2024 00:24:25.786 write: IOPS=431, BW=108MiB/s (113MB/s)(1100MiB/10206msec); 0 zone resets 00:24:25.786 slat (usec): min=20, max=40290, avg=1464.62, stdev=4469.83 00:24:25.786 clat (msec): 
min=2, max=406, avg=146.86, stdev=90.06 00:24:25.786 lat (msec): min=2, max=406, avg=148.33, stdev=91.38 00:24:25.786 clat percentiles (msec): 00:24:25.786 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 32], 20.00th=[ 55], 00:24:25.786 | 30.00th=[ 87], 40.00th=[ 116], 50.00th=[ 138], 60.00th=[ 159], 00:24:25.786 | 70.00th=[ 201], 80.00th=[ 236], 90.00th=[ 271], 95.00th=[ 296], 00:24:25.786 | 99.00th=[ 359], 99.50th=[ 372], 99.90th=[ 393], 99.95th=[ 393], 00:24:25.786 | 99.99th=[ 405] 00:24:25.786 bw ( KiB/s): min=49053, max=219136, per=7.69%, avg=111065.65, stdev=51216.70, samples=20 00:24:25.786 iops : min= 191, max= 856, avg=433.80, stdev=200.08, samples=20 00:24:25.786 lat (msec) : 4=0.39%, 10=2.00%, 20=3.39%, 50=12.11%, 100=16.77% 00:24:25.786 lat (msec) : 250=47.90%, 500=17.45% 00:24:25.786 cpu : usr=1.32%, sys=1.61%, ctx=2867, majf=0, minf=1 00:24:25.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:25.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.786 issued rwts: total=0,4401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.786 job2: (groupid=0, jobs=1): err= 0: pid=1023855: Fri Jul 26 08:57:43 2024 00:24:25.786 write: IOPS=446, BW=112MiB/s (117MB/s)(1139MiB/10206msec); 0 zone resets 00:24:25.786 slat (usec): min=18, max=86362, avg=1738.56, stdev=4710.30 00:24:25.786 clat (msec): min=3, max=491, avg=141.52, stdev=83.82 00:24:25.786 lat (msec): min=3, max=491, avg=143.26, stdev=84.89 00:24:25.786 clat percentiles (msec): 00:24:25.786 | 1.00th=[ 10], 5.00th=[ 31], 10.00th=[ 46], 20.00th=[ 75], 00:24:25.786 | 30.00th=[ 96], 40.00th=[ 110], 50.00th=[ 123], 60.00th=[ 144], 00:24:25.786 | 70.00th=[ 165], 80.00th=[ 203], 90.00th=[ 255], 95.00th=[ 309], 00:24:25.786 | 99.00th=[ 414], 99.50th=[ 451], 99.90th=[ 472], 99.95th=[ 485], 00:24:25.786 | 
99.99th=[ 493] 00:24:25.786 bw ( KiB/s): min=52224, max=203264, per=7.96%, avg=115002.45, stdev=44949.32, samples=20 00:24:25.786 iops : min= 204, max= 794, avg=449.20, stdev=175.61, samples=20 00:24:25.786 lat (msec) : 4=0.09%, 10=1.03%, 20=1.56%, 50=8.38%, 100=22.50% 00:24:25.786 lat (msec) : 250=55.29%, 500=11.15% 00:24:25.786 cpu : usr=1.38%, sys=1.52%, ctx=2302, majf=0, minf=1 00:24:25.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:25.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.786 issued rwts: total=0,4556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.786 job3: (groupid=0, jobs=1): err= 0: pid=1023856: Fri Jul 26 08:57:43 2024 00:24:25.786 write: IOPS=410, BW=103MiB/s (108MB/s)(1042MiB/10157msec); 0 zone resets 00:24:25.786 slat (usec): min=26, max=50042, avg=2043.93, stdev=4714.54 00:24:25.786 clat (msec): min=3, max=575, avg=153.87, stdev=72.75 00:24:25.786 lat (msec): min=3, max=575, avg=155.91, stdev=73.65 00:24:25.786 clat percentiles (msec): 00:24:25.786 | 1.00th=[ 15], 5.00th=[ 45], 10.00th=[ 66], 20.00th=[ 91], 00:24:25.786 | 30.00th=[ 118], 40.00th=[ 148], 50.00th=[ 161], 60.00th=[ 169], 00:24:25.786 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 218], 95.00th=[ 239], 00:24:25.786 | 99.00th=[ 405], 99.50th=[ 468], 99.90th=[ 550], 99.95th=[ 550], 00:24:25.786 | 99.99th=[ 575] 00:24:25.786 bw ( KiB/s): min=36864, max=199168, per=7.27%, avg=105002.80, stdev=38933.78, samples=20 00:24:25.786 iops : min= 144, max= 778, avg=410.15, stdev=152.09, samples=20 00:24:25.786 lat (msec) : 4=0.02%, 10=0.53%, 20=1.32%, 50=3.62%, 100=19.64% 00:24:25.786 lat (msec) : 250=70.52%, 500=4.01%, 750=0.34% 00:24:25.786 cpu : usr=1.22%, sys=1.37%, ctx=1708, majf=0, minf=1 00:24:25.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, 
>=64=98.5% 00:24:25.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.786 issued rwts: total=0,4166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.786 job4: (groupid=0, jobs=1): err= 0: pid=1023857: Fri Jul 26 08:57:43 2024 00:24:25.786 write: IOPS=458, BW=115MiB/s (120MB/s)(1164MiB/10158msec); 0 zone resets 00:24:25.786 slat (usec): min=16, max=90258, avg=1615.10, stdev=4582.64 00:24:25.786 clat (msec): min=2, max=573, avg=137.93, stdev=89.78 00:24:25.786 lat (msec): min=3, max=574, avg=139.54, stdev=91.03 00:24:25.786 clat percentiles (msec): 00:24:25.786 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 33], 20.00th=[ 46], 00:24:25.787 | 30.00th=[ 81], 40.00th=[ 107], 50.00th=[ 133], 60.00th=[ 155], 00:24:25.787 | 70.00th=[ 180], 80.00th=[ 205], 90.00th=[ 245], 95.00th=[ 296], 00:24:25.787 | 99.00th=[ 405], 99.50th=[ 435], 99.90th=[ 550], 99.95th=[ 550], 00:24:25.787 | 99.99th=[ 575] 00:24:25.787 bw ( KiB/s): min=36864, max=338944, per=8.14%, avg=117572.40, stdev=64915.99, samples=20 00:24:25.787 iops : min= 144, max= 1324, avg=459.25, stdev=253.59, samples=20 00:24:25.787 lat (msec) : 4=0.09%, 10=1.48%, 20=3.33%, 50=16.47%, 100=15.80% 00:24:25.787 lat (msec) : 250=53.53%, 500=9.04%, 750=0.26% 00:24:25.787 cpu : usr=1.49%, sys=1.31%, ctx=2532, majf=0, minf=1 00:24:25.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.787 issued rwts: total=0,4657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.787 job5: (groupid=0, jobs=1): err= 0: pid=1023858: Fri Jul 26 08:57:43 2024 00:24:25.787 write: IOPS=515, BW=129MiB/s 
(135MB/s)(1306MiB/10124msec); 0 zone resets 00:24:25.787 slat (usec): min=15, max=51526, avg=1145.66, stdev=3576.14 00:24:25.787 clat (usec): min=1175, max=551122, avg=122849.59, stdev=79836.42 00:24:25.787 lat (usec): min=1268, max=551176, avg=123995.25, stdev=80769.61 00:24:25.787 clat percentiles (msec): 00:24:25.787 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 44], 00:24:25.787 | 30.00th=[ 72], 40.00th=[ 102], 50.00th=[ 122], 60.00th=[ 148], 00:24:25.787 | 70.00th=[ 161], 80.00th=[ 180], 90.00th=[ 201], 95.00th=[ 245], 00:24:25.787 | 99.00th=[ 393], 99.50th=[ 418], 99.90th=[ 502], 99.95th=[ 527], 00:24:25.787 | 99.99th=[ 550] 00:24:25.787 bw ( KiB/s): min=47104, max=251392, per=9.14%, avg=132087.20, stdev=52871.64, samples=20 00:24:25.787 iops : min= 184, max= 982, avg=515.95, stdev=206.54, samples=20 00:24:25.787 lat (msec) : 2=0.25%, 4=0.92%, 10=2.58%, 20=3.16%, 50=15.83% 00:24:25.787 lat (msec) : 100=16.62%, 250=55.96%, 500=4.50%, 750=0.17% 00:24:25.787 cpu : usr=1.52%, sys=1.68%, ctx=3406, majf=0, minf=1 00:24:25.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.787 issued rwts: total=0,5223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.787 job6: (groupid=0, jobs=1): err= 0: pid=1023860: Fri Jul 26 08:57:43 2024 00:24:25.787 write: IOPS=458, BW=115MiB/s (120MB/s)(1155MiB/10080msec); 0 zone resets 00:24:25.787 slat (usec): min=23, max=78078, avg=1836.53, stdev=4695.28 00:24:25.787 clat (usec): min=1959, max=377834, avg=137770.17, stdev=79677.60 00:24:25.787 lat (msec): min=2, max=377, avg=139.61, stdev=80.80 00:24:25.787 clat percentiles (msec): 00:24:25.787 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 50], 20.00th=[ 71], 00:24:25.787 | 30.00th=[ 79], 40.00th=[ 96], 50.00th=[ 124], 
60.00th=[ 150], 00:24:25.787 | 70.00th=[ 182], 80.00th=[ 211], 90.00th=[ 253], 95.00th=[ 288], 00:24:25.787 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 368], 99.95th=[ 376], 00:24:25.787 | 99.99th=[ 380] 00:24:25.787 bw ( KiB/s): min=59392, max=276992, per=8.07%, avg=116623.55, stdev=54854.64, samples=20 00:24:25.787 iops : min= 232, max= 1082, avg=455.55, stdev=214.28, samples=20 00:24:25.787 lat (msec) : 2=0.02%, 4=0.02%, 10=2.42%, 20=1.88%, 50=6.02% 00:24:25.787 lat (msec) : 100=31.02%, 250=47.85%, 500=10.76% 00:24:25.787 cpu : usr=1.41%, sys=1.45%, ctx=2013, majf=0, minf=1 00:24:25.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.787 issued rwts: total=0,4619,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.787 job7: (groupid=0, jobs=1): err= 0: pid=1023861: Fri Jul 26 08:57:43 2024 00:24:25.787 write: IOPS=494, BW=124MiB/s (130MB/s)(1256MiB/10159msec); 0 zone resets 00:24:25.787 slat (usec): min=19, max=157310, avg=1075.02, stdev=4906.49 00:24:25.787 clat (usec): min=1148, max=565715, avg=128231.79, stdev=92907.31 00:24:25.787 lat (usec): min=1187, max=590243, avg=129306.81, stdev=93873.74 00:24:25.787 clat percentiles (msec): 00:24:25.787 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 46], 00:24:25.787 | 30.00th=[ 66], 40.00th=[ 99], 50.00th=[ 114], 60.00th=[ 136], 00:24:25.787 | 70.00th=[ 157], 80.00th=[ 184], 90.00th=[ 262], 95.00th=[ 317], 00:24:25.787 | 99.00th=[ 397], 99.50th=[ 435], 99.90th=[ 542], 99.95th=[ 567], 00:24:25.787 | 99.99th=[ 567] 00:24:25.787 bw ( KiB/s): min=49053, max=233984, per=8.79%, avg=126971.05, stdev=51982.34, samples=20 00:24:25.787 iops : min= 191, max= 914, avg=495.95, stdev=203.10, samples=20 00:24:25.787 lat (msec) : 2=0.18%, 4=0.60%, 10=3.48%, 
20=4.76%, 50=13.50% 00:24:25.787 lat (msec) : 100=18.19%, 250=47.79%, 500=11.27%, 750=0.24% 00:24:25.787 cpu : usr=1.47%, sys=1.93%, ctx=3547, majf=0, minf=1 00:24:25.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.787 issued rwts: total=0,5024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.787 job8: (groupid=0, jobs=1): err= 0: pid=1023862: Fri Jul 26 08:57:43 2024 00:24:25.787 write: IOPS=430, BW=108MiB/s (113MB/s)(1097MiB/10200msec); 0 zone resets 00:24:25.787 slat (usec): min=26, max=113940, avg=1935.56, stdev=4844.57 00:24:25.787 clat (msec): min=6, max=408, avg=146.73, stdev=81.70 00:24:25.787 lat (msec): min=6, max=408, avg=148.66, stdev=82.76 00:24:25.787 clat percentiles (msec): 00:24:25.787 | 1.00th=[ 29], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 63], 00:24:25.787 | 30.00th=[ 77], 40.00th=[ 115], 50.00th=[ 148], 60.00th=[ 167], 00:24:25.787 | 70.00th=[ 197], 80.00th=[ 224], 90.00th=[ 264], 95.00th=[ 288], 00:24:25.787 | 99.00th=[ 326], 99.50th=[ 351], 99.90th=[ 397], 99.95th=[ 397], 00:24:25.787 | 99.99th=[ 409] 00:24:25.787 bw ( KiB/s): min=55296, max=330240, per=7.66%, avg=110686.00, stdev=67341.61, samples=20 00:24:25.787 iops : min= 216, max= 1290, avg=432.35, stdev=263.06, samples=20 00:24:25.787 lat (msec) : 10=0.11%, 20=0.39%, 50=13.61%, 100=22.02%, 250=51.15% 00:24:25.787 lat (msec) : 500=12.72% 00:24:25.787 cpu : usr=1.36%, sys=1.39%, ctx=1650, majf=0, minf=1 00:24:25.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.787 issued rwts: total=0,4387,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:24:25.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.787 job9: (groupid=0, jobs=1): err= 0: pid=1023863: Fri Jul 26 08:57:43 2024 00:24:25.787 write: IOPS=745, BW=186MiB/s (195MB/s)(1870MiB/10038msec); 0 zone resets 00:24:25.787 slat (usec): min=16, max=48074, avg=747.56, stdev=2517.96 00:24:25.787 clat (usec): min=1162, max=318076, avg=85095.08, stdev=64542.58 00:24:25.787 lat (usec): min=1208, max=318158, avg=85842.64, stdev=65178.19 00:24:25.787 clat percentiles (msec): 00:24:25.787 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 25], 20.00th=[ 38], 00:24:25.787 | 30.00th=[ 41], 40.00th=[ 48], 50.00th=[ 61], 60.00th=[ 77], 00:24:25.787 | 70.00th=[ 100], 80.00th=[ 150], 90.00th=[ 182], 95.00th=[ 218], 00:24:25.787 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 313], 00:24:25.787 | 99.99th=[ 317] 00:24:25.787 bw ( KiB/s): min=63488, max=400896, per=13.14%, avg=189866.75, stdev=86785.14, samples=20 00:24:25.787 iops : min= 248, max= 1566, avg=741.60, stdev=339.02, samples=20 00:24:25.787 lat (msec) : 2=0.09%, 4=0.37%, 10=2.55%, 20=4.28%, 50=34.79% 00:24:25.787 lat (msec) : 100=28.39%, 250=27.06%, 500=2.46% 00:24:25.787 cpu : usr=2.30%, sys=2.38%, ctx=4625, majf=0, minf=1 00:24:25.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.787 issued rwts: total=0,7481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.787 job10: (groupid=0, jobs=1): err= 0: pid=1023864: Fri Jul 26 08:57:43 2024 00:24:25.787 write: IOPS=572, BW=143MiB/s (150MB/s)(1461MiB/10205msec); 0 zone resets 00:24:25.787 slat (usec): min=21, max=81125, avg=1067.86, stdev=3504.56 00:24:25.787 clat (msec): min=2, max=405, avg=110.59, stdev=75.73 00:24:25.787 lat (msec): min=2, max=406, 
avg=111.66, stdev=76.59 00:24:25.787 clat percentiles (msec): 00:24:25.787 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 32], 20.00th=[ 47], 00:24:25.787 | 30.00th=[ 64], 40.00th=[ 77], 50.00th=[ 96], 60.00th=[ 108], 00:24:25.787 | 70.00th=[ 126], 80.00th=[ 161], 90.00th=[ 236], 95.00th=[ 279], 00:24:25.787 | 99.00th=[ 321], 99.50th=[ 334], 99.90th=[ 393], 99.95th=[ 393], 00:24:25.787 | 99.99th=[ 405] 00:24:25.787 bw ( KiB/s): min=59392, max=254978, per=10.25%, avg=147993.70, stdev=56867.87, samples=20 00:24:25.787 iops : min= 232, max= 996, avg=578.10, stdev=222.14, samples=20 00:24:25.787 lat (msec) : 4=0.19%, 10=1.66%, 20=3.30%, 50=17.76%, 100=30.54% 00:24:25.787 lat (msec) : 250=38.55%, 500=8.01% 00:24:25.787 cpu : usr=1.83%, sys=2.07%, ctx=3609, majf=0, minf=1 00:24:25.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.787 issued rwts: total=0,5845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.787 00:24:25.787 Run status group 0 (all jobs): 00:24:25.787 WRITE: bw=1411MiB/s (1479MB/s), 103MiB/s-186MiB/s (108MB/s-195MB/s), io=14.1GiB (15.1GB), run=10038-10206msec 00:24:25.787 00:24:25.787 Disk stats (read/write): 00:24:25.787 nvme0n1: ios=47/14249, merge=0/0, ticks=1527/1216447, in_queue=1217974, util=99.91% 00:24:25.787 nvme10n1: ios=47/8775, merge=0/0, ticks=43/1248263, in_queue=1248306, util=97.56% 00:24:25.787 nvme1n1: ios=40/9083, merge=0/0, ticks=1013/1236850, in_queue=1237863, util=100.00% 00:24:25.787 nvme2n1: ios=42/8186, merge=0/0, ticks=1204/1193256, in_queue=1194460, util=100.00% 00:24:25.787 nvme3n1: ios=0/9167, merge=0/0, ticks=0/1199057, in_queue=1199057, util=97.79% 00:24:25.787 nvme4n1: ios=0/10220, merge=0/0, ticks=0/1206790, in_queue=1206790, util=98.12% 00:24:25.787 nvme5n1: 
ios=43/9020, merge=0/0, ticks=1365/1206925, in_queue=1208290, util=100.00% 00:24:25.787 nvme6n1: ios=43/9895, merge=0/0, ticks=1857/1212416, in_queue=1214273, util=100.00% 00:24:25.787 nvme7n1: ios=40/8753, merge=0/0, ticks=1100/1237486, in_queue=1238586, util=100.00% 00:24:25.787 nvme8n1: ios=0/14604, merge=0/0, ticks=0/1231781, in_queue=1231781, util=98.96% 00:24:25.787 nvme9n1: ios=36/11651, merge=0/0, ticks=672/1246975, in_queue=1247647, util=100.00% 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:25.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:25.787 08:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.787 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:25.787 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.787 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:26.046 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:26.046 08:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:26.304 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:24:26.304 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:26.564 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.564 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:26.564 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 
1 controller(s) 00:24:26.564 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:26.824 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1219 -- # local i=0 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.824 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:27.083 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.083 08:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:27.083 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l 
-o NAME,SERIAL 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.083 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:27.342 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # return 0 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:27.342 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 
00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:27.342 rmmod nvme_tcp 00:24:27.342 rmmod nvme_fabrics 00:24:27.342 rmmod nvme_keyring 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1017801 ']' 00:24:27.342 08:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1017801 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1017801 ']' 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1017801 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.342 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1017801 00:24:27.600 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:27.600 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:27.600 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1017801' 00:24:27.600 killing process with pid 1017801 00:24:27.600 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1017801 00:24:27.600 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1017801 00:24:27.858 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:27.858 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:27.858 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:27.858 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.858 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:24:27.859 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.859 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.859 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.399 00:24:30.399 real 1m0.278s 00:24:30.399 user 3m19.142s 00:24:30.399 sys 0m25.709s 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.399 ************************************ 00:24:30.399 END TEST nvmf_multiconnection 00:24:30.399 ************************************ 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:30.399 ************************************ 00:24:30.399 START TEST nvmf_initiator_timeout 00:24:30.399 ************************************ 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:30.399 * Looking for test storage... 
00:24:30.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.399 08:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:30.399 08:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.399 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:32.302 08:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.302 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:32.303 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:32.303 08:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:32.303 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:32.303 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:32.303 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.303 08:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:32.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:32.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:24:32.303 00:24:32.303 --- 10.0.0.2 ping statistics --- 00:24:32.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.303 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:32.303 00:24:32.303 --- 10.0.0.1 ping statistics --- 00:24:32.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.303 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:32.303 
08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1027181 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1027181 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1027181 ']' 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.303 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.304 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.304 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.304 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:32.304 [2024-07-26 08:57:50.655730] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:24:32.304 [2024-07-26 08:57:50.655828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.304 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.304 [2024-07-26 08:57:50.698194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:32.304 [2024-07-26 08:57:50.730415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.562 [2024-07-26 08:57:50.823936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.562 [2024-07-26 08:57:50.823991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.562 [2024-07-26 08:57:50.824005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.562 [2024-07-26 08:57:50.824017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.562 [2024-07-26 08:57:50.824028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:32.562 [2024-07-26 08:57:50.824085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.562 [2024-07-26 08:57:50.824133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.562 [2024-07-26 08:57:50.824216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.562 [2024-07-26 08:57:50.824219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:32.562 Malloc0 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.562 08:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.562 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:32.562 Delay0 00:24:32.562 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.562 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.562 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.562 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:32.562 [2024-07-26 08:57:51.009115] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.562 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.562 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:32.562 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.562 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:32.820 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.820 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:32.820 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.820 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:32.820 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.820 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.820 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.820 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:32.820 [2024-07-26 08:57:51.037434] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.820 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.820 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:33.388 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:33.388 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:24:33.388 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:33.388 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:33.388 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:24:35.292 08:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:35.292 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:35.292 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:35.292 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:35.292 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:35.292 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:24:35.292 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1027492 00:24:35.292 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:35.292 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:35.292 [global] 00:24:35.292 thread=1 00:24:35.292 invalidate=1 00:24:35.292 rw=write 00:24:35.292 time_based=1 00:24:35.292 runtime=60 00:24:35.292 ioengine=libaio 00:24:35.292 direct=1 00:24:35.292 bs=4096 00:24:35.292 iodepth=1 00:24:35.292 norandommap=0 00:24:35.292 numjobs=1 00:24:35.292 00:24:35.552 verify_dump=1 00:24:35.552 verify_backlog=512 00:24:35.552 verify_state_save=0 00:24:35.552 do_verify=1 00:24:35.552 verify=crc32c-intel 00:24:35.552 [job0] 00:24:35.552 filename=/dev/nvme0n1 00:24:35.552 Could not set queue depth (nvme0n1) 00:24:35.552 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:35.552 fio-3.35 00:24:35.552 Starting 1 thread 00:24:38.837 08:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:38.837 true 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:38.837 true 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:38.837 true 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.837 08:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:38.837 true 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.837 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.411 true 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.411 true 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.411 true 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:41.411 true 00:24:41.411 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.412 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:41.412 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1027492 00:25:37.642 00:25:37.643 job0: (groupid=0, jobs=1): err= 0: pid=1027679: Fri Jul 26 08:58:54 2024 00:25:37.643 read: IOPS=131, BW=525KiB/s (538kB/s)(30.8MiB/60013msec) 00:25:37.643 slat (usec): min=5, max=7489, avg=13.09, stdev=106.57 00:25:37.643 clat (usec): min=275, max=42081, avg=2100.80, stdev=8285.50 00:25:37.643 lat (usec): min=283, max=42096, avg=2113.89, stdev=8286.49 00:25:37.643 clat percentiles (usec): 00:25:37.643 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 330], 00:25:37.643 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 359], 00:25:37.643 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 416], 95.00th=[ 553], 00:25:37.643 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:37.643 | 99.99th=[42206] 00:25:37.643 write: IOPS=136, BW=546KiB/s (559kB/s)(32.0MiB/60013msec); 0 zone resets 00:25:37.643 slat (nsec): min=7233, max=86287, avg=17828.15, stdev=10198.48 00:25:37.643 clat (usec): min=197, max=40848k, avg=5265.26, stdev=451313.20 00:25:37.643 lat (usec): min=205, max=40848k, avg=5283.09, stdev=451313.09 00:25:37.643 clat percentiles (usec): 00:25:37.643 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 00:25:37.643 | 20.00th=[ 
225], 30.00th=[ 233], 40.00th=[ 258], 00:25:37.643 | 50.00th=[ 281], 60.00th=[ 297], 70.00th=[ 306], 00:25:37.643 | 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 379], 00:25:37.643 | 99.00th=[ 412], 99.50th=[ 424], 99.90th=[ 453], 00:25:37.643 | 99.95th=[ 644], 99.99th=[17112761] 00:25:37.643 bw ( KiB/s): min= 592, max= 7064, per=100.00%, avg=5041.23, stdev=1751.89, samples=13 00:25:37.643 iops : min= 148, max= 1766, avg=1260.31, stdev=437.97, samples=13 00:25:37.643 lat (usec) : 250=19.15%, 500=77.48%, 750=1.24%, 1000=0.02% 00:25:37.643 lat (msec) : 2=0.02%, 4=0.01%, 50=2.08%, >=2000=0.01% 00:25:37.643 cpu : usr=0.34%, sys=0.47%, ctx=16078, majf=0, minf=2 00:25:37.643 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:37.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.643 issued rwts: total=7884,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.643 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:37.643 00:25:37.643 Run status group 0 (all jobs): 00:25:37.643 READ: bw=525KiB/s (538kB/s), 525KiB/s-525KiB/s (538kB/s-538kB/s), io=30.8MiB (32.3MB), run=60013-60013msec 00:25:37.643 WRITE: bw=546KiB/s (559kB/s), 546KiB/s-546KiB/s (559kB/s-559kB/s), io=32.0MiB (33.6MB), run=60013-60013msec 00:25:37.643 00:25:37.643 Disk stats (read/write): 00:25:37.643 nvme0n1: ios=7980/8192, merge=0/0, ticks=17611/2174, in_queue=19785, util=99.66% 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:37.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1219 -- # local i=0 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:37.643 nvmf hotplug test: fio successful as expected 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # 
nvmftestfini 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:37.643 rmmod nvme_tcp 00:25:37.643 rmmod nvme_fabrics 00:25:37.643 rmmod nvme_keyring 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1027181 ']' 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1027181 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1027181 ']' 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1027181 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1027181 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1027181' 00:25:37.643 killing process with pid 1027181 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1027181 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1027181 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.643 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.212 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.212 00:25:38.212 real 1m8.195s 00:25:38.212 user 4m9.267s 00:25:38.212 sys 0m7.643s 00:25:38.212 08:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:38.212 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:38.212 ************************************ 00:25:38.212 END TEST nvmf_initiator_timeout 00:25:38.212 ************************************ 00:25:38.212 08:58:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:25:38.212 08:58:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:25:38.212 08:58:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:25:38.212 08:58:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:25:38.212 08:58:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 
00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:40.119 08:58:58 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:40.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:40.119 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:40.119 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:40.119 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:40.119 ************************************ 00:25:40.119 START TEST nvmf_perf_adq 00:25:40.119 ************************************ 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:40.119 * Looking for test storage... 
00:25:40.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.119 08:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.119 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.120 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.378 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.378 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.378 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.378 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.378 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.378 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.378 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:40.378 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:40.378 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:42.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:42.282 08:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:42.282 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:42.282 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:42.282 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:25:42.282 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:42.851 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:44.756 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:50.029 
08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:50.029 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:50.029 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:50.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:50.030 08:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:50.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.030 08:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:50.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:50.030 
08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:50.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:50.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:25:50.030 00:25:50.030 --- 10.0.0.2 ping statistics --- 00:25:50.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.030 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:25:50.030 00:25:50.030 --- 10.0.0.1 ping statistics --- 00:25:50.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.030 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1039075 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1039075 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1039075 ']' 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:50.030 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.030 [2024-07-26 08:59:08.346003] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:25:50.030 [2024-07-26 08:59:08.346104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.030 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.030 [2024-07-26 08:59:08.382776] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:50.030 [2024-07-26 08:59:08.413265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.290 [2024-07-26 08:59:08.505323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.290 [2024-07-26 08:59:08.505385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.290 [2024-07-26 08:59:08.505416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.290 [2024-07-26 08:59:08.505434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.290 [2024-07-26 08:59:08.505449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
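For reference, the network bring-up that `nvmf_tcp_init` replayed in the records above boils down to the following sketch. The interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are the values this particular run discovered under the two E810 ports; the commands need root and a machine with those devices, so this is a summary of the log, not something to run as-is:

```
# Move one E810 port into a private namespace so target and initiator
# traffic crosses the physical link instead of the loopback path.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Target side (inside the namespace) and initiator side (host).
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ip link set cvl_0_1 up

# Allow NVMe/TCP (port 4420) in from the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The cross-namespace pings that follow in the log are the sanity check that this topology is reachable in both directions before the target starts.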
00:25:50.290 [2024-07-26 08:59:08.505560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.290 [2024-07-26 08:59:08.505611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.290 [2024-07-26 08:59:08.505670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.290 [2024-07-26 08:59:08.505676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:50.290 08:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.290 [2024-07-26 08:59:08.730899] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.290 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.551 Malloc1 00:25:50.551 08:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.551 [2024-07-26 08:59:08.783664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1039221 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:50.551 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:25:50.551 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.513 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:25:52.513 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.513 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.513 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.513 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:25:52.513 "tick_rate": 2700000000, 00:25:52.513 "poll_groups": [ 00:25:52.513 { 00:25:52.513 "name": "nvmf_tgt_poll_group_000", 00:25:52.513 "admin_qpairs": 1, 00:25:52.513 "io_qpairs": 1, 00:25:52.513 "current_admin_qpairs": 1, 00:25:52.513 "current_io_qpairs": 1, 00:25:52.513 "pending_bdev_io": 0, 00:25:52.513 "completed_nvme_io": 20030, 00:25:52.513 "transports": [ 00:25:52.513 { 00:25:52.513 "trtype": "TCP" 00:25:52.513 } 00:25:52.513 ] 00:25:52.513 }, 00:25:52.513 { 00:25:52.513 "name": "nvmf_tgt_poll_group_001", 00:25:52.513 "admin_qpairs": 0, 00:25:52.513 "io_qpairs": 1, 00:25:52.513 "current_admin_qpairs": 0, 00:25:52.513 "current_io_qpairs": 1, 00:25:52.513 "pending_bdev_io": 0, 00:25:52.513 "completed_nvme_io": 18360, 00:25:52.513 "transports": [ 00:25:52.513 { 00:25:52.513 "trtype": "TCP" 00:25:52.513 } 00:25:52.513 ] 00:25:52.513 }, 00:25:52.513 { 00:25:52.513 "name": "nvmf_tgt_poll_group_002", 00:25:52.513 "admin_qpairs": 0, 00:25:52.513 "io_qpairs": 1, 00:25:52.513 "current_admin_qpairs": 0, 00:25:52.513 "current_io_qpairs": 1, 
00:25:52.513 "pending_bdev_io": 0, 00:25:52.513 "completed_nvme_io": 19107, 00:25:52.513 "transports": [ 00:25:52.513 { 00:25:52.513 "trtype": "TCP" 00:25:52.513 } 00:25:52.513 ] 00:25:52.513 }, 00:25:52.513 { 00:25:52.513 "name": "nvmf_tgt_poll_group_003", 00:25:52.513 "admin_qpairs": 0, 00:25:52.513 "io_qpairs": 1, 00:25:52.513 "current_admin_qpairs": 0, 00:25:52.513 "current_io_qpairs": 1, 00:25:52.513 "pending_bdev_io": 0, 00:25:52.513 "completed_nvme_io": 21326, 00:25:52.513 "transports": [ 00:25:52.513 { 00:25:52.513 "trtype": "TCP" 00:25:52.513 } 00:25:52.513 ] 00:25:52.513 } 00:25:52.513 ] 00:25:52.513 }' 00:25:52.513 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:52.513 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:25:52.513 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:25:52.513 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:25:52.513 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1039221 00:26:00.646 Initializing NVMe Controllers 00:26:00.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:00.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:00.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:00.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:00.646 Initialization complete. Launching workers. 
00:26:00.646 ======================================================== 00:26:00.646 Latency(us) 00:26:00.646 Device Information : IOPS MiB/s Average min max 00:26:00.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11203.90 43.77 5712.56 2199.72 8462.60 00:26:00.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9648.20 37.69 6634.86 2935.18 13114.82 00:26:00.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9997.20 39.05 6401.50 2993.50 9746.56 00:26:00.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10393.60 40.60 6157.99 2971.54 8867.55 00:26:00.646 ======================================================== 00:26:00.646 Total : 41242.88 161.11 6207.57 2199.72 13114.82 00:26:00.646 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:00.646 rmmod nvme_tcp 00:26:00.646 rmmod nvme_fabrics 00:26:00.646 rmmod nvme_keyring 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:00.646 08:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1039075 ']' 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1039075 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1039075 ']' 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1039075 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:00.646 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1039075 00:26:00.646 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:00.646 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:00.646 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1039075' 00:26:00.646 killing process with pid 1039075 00:26:00.646 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1039075 00:26:00.646 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1039075 00:26:00.904 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:00.904 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:00.904 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:00.904 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:00.904 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:26:00.904 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.904 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.904 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.446 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:03.446 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:03.446 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:03.705 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:05.622 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
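The pass/fail logic in the run above (perf_adq.sh lines 77-79) verifies that ADQ spread the four I/O queue pairs evenly: it pulls `nvmf_get_stats`, selects poll groups with `current_io_qpairs == 1`, and fails unless the count is exactly 4. A minimal sketch of that counting step, using `grep -c` instead of the script's `jq ... | wc -l` on an abbreviated inline copy of the stats printed above (the four `1` values are taken from the log; the inline sample is illustrative, not live RPC output):

```shell
# One "current_io_qpairs" field per poll group, copied from the
# nvmf_get_stats JSON in the log above.
stats='"current_io_qpairs": 1
"current_io_qpairs": 1
"current_io_qpairs": 1
"current_io_qpairs": 1'

# Count poll groups serving exactly one I/O qpair, mirroring
# jq 'select(.current_io_qpairs == 1)' | wc -l in perf_adq.sh.
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')
echo "busy poll groups: $count"

# The test only proceeds when every poll group received one connection.
if [ "$count" -eq 4 ]; then
    echo "ADQ placement OK"
else
    echo "ADQ placement FAILED"
fi
```

With the values from this run the check prints `busy poll groups: 4` and passes, which is why the script's `[[ 4 -ne 4 ]]` guard falls through to the perf wait.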
00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:10.903 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:10.904 08:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:10.904 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:10.904 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:10.904 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:10.904 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.904 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:10.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:10.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:26:10.904 00:26:10.904 --- 10.0.0.2 ping statistics --- 00:26:10.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.904 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:10.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:26:10.904 00:26:10.904 --- 10.0.0.1 ping statistics --- 00:26:10.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.904 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:10.904 net.core.busy_poll = 1 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:10.904 net.core.busy_read = 1 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:10.904 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
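The commands interleaved in the trace above are the complete ADQ host-side configuration for this run. Collected in one place for readability (device `cvl_0_0`, namespace `cvl_0_0_ns_spdk`, and address `10.0.0.2:4420` are the values from this log; this is a sketch of what the test executed, not a general recipe — the offload flags require an ice-driver NIC and root):

```
# NIC: enable hardware TC offload, disable packet-inspect optimization
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 \
    channel-pkt-inspect-optimize off

# Kernel: let application threads busy-poll the NIC queues
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ)
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 in hardware
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip \
    parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp \
    dst_port 4420 skip_sw hw_tc 1

# Pin transmit queues to the matching receive queues (SPDK helper)
ip netns exec cvl_0_0_ns_spdk \
    ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0
```

With `net.core.busy_poll` enabled, the second perf run below is expected to distribute connections differently, so the follow-up check counts idle poll groups instead of busy ones.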
00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1041829 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1041829 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1041829 ']' 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.905 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:10.905 [2024-07-26 08:59:29.295550] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:10.905 [2024-07-26 08:59:29.295640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.905 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.905 [2024-07-26 08:59:29.337228] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:26:11.165 [2024-07-26 08:59:29.364345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:11.165 [2024-07-26 08:59:29.449982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.165 [2024-07-26 08:59:29.450035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.165 [2024-07-26 08:59:29.450068] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.165 [2024-07-26 08:59:29.450081] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.165 [2024-07-26 08:59:29.450091] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:11.165 [2024-07-26 08:59:29.450155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.165 [2024-07-26 08:59:29.450218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.165 [2024-07-26 08:59:29.450269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:11.165 [2024-07-26 08:59:29.450271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.165 08:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.165 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:11.426 [2024-07-26 08:59:29.689643] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:11.426 Malloc1 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:11.426 [2024-07-26 08:59:29.742822] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1041866 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:11.426 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:11.426 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.331 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:13.331 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.331 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.331 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.331 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:13.331 "tick_rate": 2700000000, 00:26:13.331 "poll_groups": [ 00:26:13.331 { 00:26:13.331 "name": "nvmf_tgt_poll_group_000", 00:26:13.331 "admin_qpairs": 1, 00:26:13.331 "io_qpairs": 3, 00:26:13.331 "current_admin_qpairs": 1, 00:26:13.331 "current_io_qpairs": 3, 00:26:13.331 "pending_bdev_io": 0, 
00:26:13.331 "completed_nvme_io": 24645, 00:26:13.331 "transports": [ 00:26:13.331 { 00:26:13.331 "trtype": "TCP" 00:26:13.331 } 00:26:13.331 ] 00:26:13.331 }, 00:26:13.331 { 00:26:13.331 "name": "nvmf_tgt_poll_group_001", 00:26:13.331 "admin_qpairs": 0, 00:26:13.331 "io_qpairs": 1, 00:26:13.331 "current_admin_qpairs": 0, 00:26:13.331 "current_io_qpairs": 1, 00:26:13.331 "pending_bdev_io": 0, 00:26:13.331 "completed_nvme_io": 26393, 00:26:13.331 "transports": [ 00:26:13.331 { 00:26:13.331 "trtype": "TCP" 00:26:13.331 } 00:26:13.331 ] 00:26:13.331 }, 00:26:13.331 { 00:26:13.331 "name": "nvmf_tgt_poll_group_002", 00:26:13.331 "admin_qpairs": 0, 00:26:13.331 "io_qpairs": 0, 00:26:13.331 "current_admin_qpairs": 0, 00:26:13.331 "current_io_qpairs": 0, 00:26:13.331 "pending_bdev_io": 0, 00:26:13.331 "completed_nvme_io": 0, 00:26:13.331 "transports": [ 00:26:13.331 { 00:26:13.331 "trtype": "TCP" 00:26:13.331 } 00:26:13.331 ] 00:26:13.331 }, 00:26:13.331 { 00:26:13.331 "name": "nvmf_tgt_poll_group_003", 00:26:13.331 "admin_qpairs": 0, 00:26:13.331 "io_qpairs": 0, 00:26:13.331 "current_admin_qpairs": 0, 00:26:13.331 "current_io_qpairs": 0, 00:26:13.331 "pending_bdev_io": 0, 00:26:13.331 "completed_nvme_io": 0, 00:26:13.331 "transports": [ 00:26:13.331 { 00:26:13.331 "trtype": "TCP" 00:26:13.331 } 00:26:13.331 ] 00:26:13.331 } 00:26:13.331 ] 00:26:13.331 }' 00:26:13.331 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:13.331 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:13.590 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:13.590 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:13.590 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1041866 00:26:21.743 Initializing NVMe Controllers 
00:26:21.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:21.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:21.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:21.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:21.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:21.743 Initialization complete. Launching workers. 00:26:21.743 ======================================================== 00:26:21.743 Latency(us) 00:26:21.743 Device Information : IOPS MiB/s Average min max 00:26:21.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4309.90 16.84 14855.26 2291.36 66460.31 00:26:21.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13926.30 54.40 4595.38 1621.49 7048.00 00:26:21.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4182.30 16.34 15310.01 3268.21 64894.36 00:26:21.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4558.90 17.81 14045.21 1849.79 63867.22 00:26:21.743 ======================================================== 00:26:21.743 Total : 26977.40 105.38 9492.50 1621.49 66460.31 00:26:21.743 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:21.743 08:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:21.743 rmmod nvme_tcp 00:26:21.743 rmmod nvme_fabrics 00:26:21.743 rmmod nvme_keyring 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1041829 ']' 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1041829 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1041829 ']' 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1041829 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1041829 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1041829' 00:26:21.743 killing process with pid 1041829 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1041829 00:26:21.743 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1041829 00:26:22.001 
08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:22.001 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:22.001 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:22.001 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:22.001 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:22.001 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.001 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.001 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:23.906 00:26:23.906 real 0m43.737s 00:26:23.906 user 2m30.790s 00:26:23.906 sys 0m12.804s 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:23.906 ************************************ 00:26:23.906 END TEST nvmf_perf_adq 00:26:23.906 ************************************ 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:23.906 ************************************ 00:26:23.906 START TEST nvmf_shutdown 00:26:23.906 ************************************ 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:23.906 * Looking for test storage... 00:26:23.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.906 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.165 08:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:24.165 
08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:24.165 ************************************ 00:26:24.165 START TEST nvmf_shutdown_tc1 00:26:24.165 ************************************ 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:24.165 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:24.166 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.166 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.166 08:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.166 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:24.166 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:24.166 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:24.166 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 
-- # local -ga e810 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:26.067 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.067 08:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:26.067 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.067 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:26.068 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:26.068 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.068 
08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:26.068 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.326 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.326 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.326 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:26.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:26.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:26:26.326 00:26:26.327 --- 10.0.0.2 ping statistics --- 00:26:26.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.327 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:26.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:26:26.327 00:26:26.327 --- 10.0.0.1 ping statistics --- 00:26:26.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.327 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:26.327 08:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1045020 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1045020 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1045020 ']' 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:26.327 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.327 [2024-07-26 08:59:44.640171] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:26.327 [2024-07-26 08:59:44.640248] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.327 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.327 [2024-07-26 08:59:44.679834] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:26.327 [2024-07-26 08:59:44.706681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.588 [2024-07-26 08:59:44.792261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.588 [2024-07-26 08:59:44.792327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.588 [2024-07-26 08:59:44.792356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.588 [2024-07-26 08:59:44.792367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.588 [2024-07-26 08:59:44.792377] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:26.588 [2024-07-26 08:59:44.792471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.588 [2024-07-26 08:59:44.792532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.588 [2024-07-26 08:59:44.792594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:26.588 [2024-07-26 08:59:44.792596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.588 [2024-07-26 08:59:44.926177] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.588 08:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.588 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.588 Malloc1 00:26:26.588 [2024-07-26 08:59:45.001178] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.588 Malloc2 00:26:26.849 Malloc3 00:26:26.849 Malloc4 00:26:26.849 Malloc5 00:26:26.849 Malloc6 00:26:26.849 Malloc7 00:26:27.110 Malloc8 00:26:27.110 Malloc9 
00:26:27.110 Malloc10 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1045193 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1045193 /var/tmp/bdevperf.sock 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1045193 ']' 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:26:27.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.110 { 00:26:27.110 "params": { 00:26:27.110 "name": "Nvme$subsystem", 00:26:27.110 "trtype": "$TEST_TRANSPORT", 00:26:27.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.110 "adrfam": "ipv4", 00:26:27.110 "trsvcid": "$NVMF_PORT", 00:26:27.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.110 "hdgst": ${hdgst:-false}, 00:26:27.110 "ddgst": ${ddgst:-false} 00:26:27.110 }, 00:26:27.110 "method": "bdev_nvme_attach_controller" 00:26:27.110 } 00:26:27.110 EOF 00:26:27.110 )") 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.110 { 00:26:27.110 "params": { 00:26:27.110 "name": "Nvme$subsystem", 00:26:27.110 "trtype": "$TEST_TRANSPORT", 00:26:27.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.110 "adrfam": "ipv4", 00:26:27.110 "trsvcid": "$NVMF_PORT", 00:26:27.110 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.110 "hdgst": ${hdgst:-false}, 00:26:27.110 "ddgst": ${ddgst:-false} 00:26:27.110 }, 00:26:27.110 "method": "bdev_nvme_attach_controller" 00:26:27.110 } 00:26:27.110 EOF 00:26:27.110 )") 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.110 { 00:26:27.110 "params": { 00:26:27.110 "name": "Nvme$subsystem", 00:26:27.110 "trtype": "$TEST_TRANSPORT", 00:26:27.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.110 "adrfam": "ipv4", 00:26:27.110 "trsvcid": "$NVMF_PORT", 00:26:27.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.110 "hdgst": ${hdgst:-false}, 00:26:27.110 "ddgst": ${ddgst:-false} 00:26:27.110 }, 00:26:27.110 "method": "bdev_nvme_attach_controller" 00:26:27.110 } 00:26:27.110 EOF 00:26:27.110 )") 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.110 { 00:26:27.110 "params": { 00:26:27.110 "name": "Nvme$subsystem", 00:26:27.110 "trtype": "$TEST_TRANSPORT", 00:26:27.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.110 "adrfam": "ipv4", 00:26:27.110 "trsvcid": "$NVMF_PORT", 00:26:27.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.110 "hdgst": 
${hdgst:-false}, 00:26:27.110 "ddgst": ${ddgst:-false} 00:26:27.110 }, 00:26:27.110 "method": "bdev_nvme_attach_controller" 00:26:27.110 } 00:26:27.110 EOF 00:26:27.110 )") 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.110 { 00:26:27.110 "params": { 00:26:27.110 "name": "Nvme$subsystem", 00:26:27.110 "trtype": "$TEST_TRANSPORT", 00:26:27.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.110 "adrfam": "ipv4", 00:26:27.110 "trsvcid": "$NVMF_PORT", 00:26:27.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.110 "hdgst": ${hdgst:-false}, 00:26:27.110 "ddgst": ${ddgst:-false} 00:26:27.110 }, 00:26:27.110 "method": "bdev_nvme_attach_controller" 00:26:27.110 } 00:26:27.110 EOF 00:26:27.110 )") 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:27.110 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.111 { 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme$subsystem", 00:26:27.111 "trtype": "$TEST_TRANSPORT", 00:26:27.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "$NVMF_PORT", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.111 "hdgst": ${hdgst:-false}, 00:26:27.111 "ddgst": ${ddgst:-false} 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 
00:26:27.111 } 00:26:27.111 EOF 00:26:27.111 )") 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.111 { 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme$subsystem", 00:26:27.111 "trtype": "$TEST_TRANSPORT", 00:26:27.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "$NVMF_PORT", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.111 "hdgst": ${hdgst:-false}, 00:26:27.111 "ddgst": ${ddgst:-false} 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 } 00:26:27.111 EOF 00:26:27.111 )") 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.111 { 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme$subsystem", 00:26:27.111 "trtype": "$TEST_TRANSPORT", 00:26:27.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "$NVMF_PORT", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.111 "hdgst": ${hdgst:-false}, 00:26:27.111 "ddgst": ${ddgst:-false} 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 } 00:26:27.111 EOF 00:26:27.111 )") 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.111 { 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme$subsystem", 00:26:27.111 "trtype": "$TEST_TRANSPORT", 00:26:27.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "$NVMF_PORT", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.111 "hdgst": ${hdgst:-false}, 00:26:27.111 "ddgst": ${ddgst:-false} 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 } 00:26:27.111 EOF 00:26:27.111 )") 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.111 { 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme$subsystem", 00:26:27.111 "trtype": "$TEST_TRANSPORT", 00:26:27.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "$NVMF_PORT", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.111 "hdgst": ${hdgst:-false}, 00:26:27.111 "ddgst": ${ddgst:-false} 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 } 00:26:27.111 EOF 00:26:27.111 )") 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@556 -- # jq . 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:27.111 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme1", 00:26:27.111 "trtype": "tcp", 00:26:27.111 "traddr": "10.0.0.2", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "4420", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.111 "hdgst": false, 00:26:27.111 "ddgst": false 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 },{ 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme2", 00:26:27.111 "trtype": "tcp", 00:26:27.111 "traddr": "10.0.0.2", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "4420", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:27.111 "hdgst": false, 00:26:27.111 "ddgst": false 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 },{ 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme3", 00:26:27.111 "trtype": "tcp", 00:26:27.111 "traddr": "10.0.0.2", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "4420", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:27.111 "hdgst": false, 00:26:27.111 "ddgst": false 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 },{ 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme4", 00:26:27.111 "trtype": "tcp", 00:26:27.111 "traddr": "10.0.0.2", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "4420", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:27.111 "hdgst": false, 00:26:27.111 "ddgst": false 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 },{ 
00:26:27.111 "params": { 00:26:27.111 "name": "Nvme5", 00:26:27.111 "trtype": "tcp", 00:26:27.111 "traddr": "10.0.0.2", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "4420", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:27.111 "hdgst": false, 00:26:27.111 "ddgst": false 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 },{ 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme6", 00:26:27.111 "trtype": "tcp", 00:26:27.111 "traddr": "10.0.0.2", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "4420", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:27.111 "hdgst": false, 00:26:27.111 "ddgst": false 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 },{ 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme7", 00:26:27.111 "trtype": "tcp", 00:26:27.111 "traddr": "10.0.0.2", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "4420", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:27.111 "hdgst": false, 00:26:27.111 "ddgst": false 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 },{ 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme8", 00:26:27.111 "trtype": "tcp", 00:26:27.111 "traddr": "10.0.0.2", 00:26:27.111 "adrfam": "ipv4", 00:26:27.111 "trsvcid": "4420", 00:26:27.111 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:27.111 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:27.111 "hdgst": false, 00:26:27.111 "ddgst": false 00:26:27.111 }, 00:26:27.111 "method": "bdev_nvme_attach_controller" 00:26:27.111 },{ 00:26:27.111 "params": { 00:26:27.111 "name": "Nvme9", 00:26:27.112 "trtype": "tcp", 00:26:27.112 "traddr": "10.0.0.2", 00:26:27.112 "adrfam": "ipv4", 00:26:27.112 "trsvcid": "4420", 00:26:27.112 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:27.112 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:26:27.112 "hdgst": false, 00:26:27.112 "ddgst": false 00:26:27.112 }, 00:26:27.112 "method": "bdev_nvme_attach_controller" 00:26:27.112 },{ 00:26:27.112 "params": { 00:26:27.112 "name": "Nvme10", 00:26:27.112 "trtype": "tcp", 00:26:27.112 "traddr": "10.0.0.2", 00:26:27.112 "adrfam": "ipv4", 00:26:27.112 "trsvcid": "4420", 00:26:27.112 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:27.112 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:27.112 "hdgst": false, 00:26:27.112 "ddgst": false 00:26:27.112 }, 00:26:27.112 "method": "bdev_nvme_attach_controller" 00:26:27.112 }' 00:26:27.112 [2024-07-26 08:59:45.519568] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:27.112 [2024-07-26 08:59:45.519639] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:27.112 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.112 [2024-07-26 08:59:45.554429] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:27.372 [2024-07-26 08:59:45.584213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.372 [2024-07-26 08:59:45.672304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.280 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:29.281 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:29.281 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:29.281 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.281 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:29.281 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.281 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1045193 00:26:29.281 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:29.281 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:30.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1045193 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:30.216 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1045020 00:26:30.216 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 
-w verify -t 1 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.217 { 00:26:30.217 "params": { 00:26:30.217 "name": "Nvme$subsystem", 00:26:30.217 "trtype": "$TEST_TRANSPORT", 00:26:30.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.217 "adrfam": "ipv4", 00:26:30.217 "trsvcid": "$NVMF_PORT", 00:26:30.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.217 "hdgst": ${hdgst:-false}, 00:26:30.217 "ddgst": ${ddgst:-false} 00:26:30.217 }, 00:26:30.217 "method": "bdev_nvme_attach_controller" 00:26:30.217 } 00:26:30.217 EOF 00:26:30.217 )") 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.217 { 00:26:30.217 "params": { 00:26:30.217 "name": "Nvme$subsystem", 00:26:30.217 "trtype": "$TEST_TRANSPORT", 00:26:30.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.217 "adrfam": "ipv4", 00:26:30.217 "trsvcid": "$NVMF_PORT", 00:26:30.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.217 
"hdgst": ${hdgst:-false}, 00:26:30.217 "ddgst": ${ddgst:-false} 00:26:30.217 }, 00:26:30.217 "method": "bdev_nvme_attach_controller" 00:26:30.217 } 00:26:30.217 EOF 00:26:30.217 )") 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.217 { 00:26:30.217 "params": { 00:26:30.217 "name": "Nvme$subsystem", 00:26:30.217 "trtype": "$TEST_TRANSPORT", 00:26:30.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.217 "adrfam": "ipv4", 00:26:30.217 "trsvcid": "$NVMF_PORT", 00:26:30.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.217 "hdgst": ${hdgst:-false}, 00:26:30.217 "ddgst": ${ddgst:-false} 00:26:30.217 }, 00:26:30.217 "method": "bdev_nvme_attach_controller" 00:26:30.217 } 00:26:30.217 EOF 00:26:30.217 )") 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.217 { 00:26:30.217 "params": { 00:26:30.217 "name": "Nvme$subsystem", 00:26:30.217 "trtype": "$TEST_TRANSPORT", 00:26:30.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.217 "adrfam": "ipv4", 00:26:30.217 "trsvcid": "$NVMF_PORT", 00:26:30.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.217 "hdgst": ${hdgst:-false}, 00:26:30.217 "ddgst": ${ddgst:-false} 00:26:30.217 }, 00:26:30.217 "method": 
"bdev_nvme_attach_controller" 00:26:30.217 } 00:26:30.217 EOF 00:26:30.217 )") 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.217 { 00:26:30.217 "params": { 00:26:30.217 "name": "Nvme$subsystem", 00:26:30.217 "trtype": "$TEST_TRANSPORT", 00:26:30.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.217 "adrfam": "ipv4", 00:26:30.217 "trsvcid": "$NVMF_PORT", 00:26:30.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.217 "hdgst": ${hdgst:-false}, 00:26:30.217 "ddgst": ${ddgst:-false} 00:26:30.217 }, 00:26:30.217 "method": "bdev_nvme_attach_controller" 00:26:30.217 } 00:26:30.217 EOF 00:26:30.217 )") 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.217 { 00:26:30.217 "params": { 00:26:30.217 "name": "Nvme$subsystem", 00:26:30.217 "trtype": "$TEST_TRANSPORT", 00:26:30.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.217 "adrfam": "ipv4", 00:26:30.217 "trsvcid": "$NVMF_PORT", 00:26:30.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.217 "hdgst": ${hdgst:-false}, 00:26:30.217 "ddgst": ${ddgst:-false} 00:26:30.217 }, 00:26:30.217 "method": "bdev_nvme_attach_controller" 00:26:30.217 } 00:26:30.217 EOF 00:26:30.217 )") 00:26:30.217 08:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.217 { 00:26:30.217 "params": { 00:26:30.217 "name": "Nvme$subsystem", 00:26:30.217 "trtype": "$TEST_TRANSPORT", 00:26:30.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.217 "adrfam": "ipv4", 00:26:30.217 "trsvcid": "$NVMF_PORT", 00:26:30.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.217 "hdgst": ${hdgst:-false}, 00:26:30.217 "ddgst": ${ddgst:-false} 00:26:30.217 }, 00:26:30.217 "method": "bdev_nvme_attach_controller" 00:26:30.217 } 00:26:30.217 EOF 00:26:30.217 )") 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.217 { 00:26:30.217 "params": { 00:26:30.217 "name": "Nvme$subsystem", 00:26:30.217 "trtype": "$TEST_TRANSPORT", 00:26:30.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.217 "adrfam": "ipv4", 00:26:30.217 "trsvcid": "$NVMF_PORT", 00:26:30.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.217 "hdgst": ${hdgst:-false}, 00:26:30.217 "ddgst": ${ddgst:-false} 00:26:30.217 }, 00:26:30.217 "method": "bdev_nvme_attach_controller" 00:26:30.217 } 00:26:30.217 EOF 00:26:30.217 )") 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:30.217 08:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.217 { 00:26:30.217 "params": { 00:26:30.217 "name": "Nvme$subsystem", 00:26:30.217 "trtype": "$TEST_TRANSPORT", 00:26:30.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.217 "adrfam": "ipv4", 00:26:30.217 "trsvcid": "$NVMF_PORT", 00:26:30.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.217 "hdgst": ${hdgst:-false}, 00:26:30.217 "ddgst": ${ddgst:-false} 00:26:30.217 }, 00:26:30.217 "method": "bdev_nvme_attach_controller" 00:26:30.217 } 00:26:30.217 EOF 00:26:30.217 )") 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.217 { 00:26:30.217 "params": { 00:26:30.217 "name": "Nvme$subsystem", 00:26:30.217 "trtype": "$TEST_TRANSPORT", 00:26:30.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.217 "adrfam": "ipv4", 00:26:30.217 "trsvcid": "$NVMF_PORT", 00:26:30.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.217 "hdgst": ${hdgst:-false}, 00:26:30.217 "ddgst": ${ddgst:-false} 00:26:30.217 }, 00:26:30.217 "method": "bdev_nvme_attach_controller" 00:26:30.217 } 00:26:30.217 EOF 00:26:30.217 )") 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:30.217 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:30.218 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:30.218 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:30.218 "params": { 00:26:30.218 "name": "Nvme1", 00:26:30.218 "trtype": "tcp", 00:26:30.218 "traddr": "10.0.0.2", 00:26:30.218 "adrfam": "ipv4", 00:26:30.218 "trsvcid": "4420", 00:26:30.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.218 "hdgst": false, 00:26:30.218 "ddgst": false 00:26:30.218 }, 00:26:30.218 "method": "bdev_nvme_attach_controller" 00:26:30.218 },{ 00:26:30.218 "params": { 00:26:30.218 "name": "Nvme2", 00:26:30.218 "trtype": "tcp", 00:26:30.218 "traddr": "10.0.0.2", 00:26:30.218 "adrfam": "ipv4", 00:26:30.218 "trsvcid": "4420", 00:26:30.218 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:30.218 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:30.218 "hdgst": false, 00:26:30.218 "ddgst": false 00:26:30.218 }, 00:26:30.218 "method": "bdev_nvme_attach_controller" 00:26:30.218 },{ 00:26:30.218 "params": { 00:26:30.218 "name": "Nvme3", 00:26:30.218 "trtype": "tcp", 00:26:30.218 "traddr": "10.0.0.2", 00:26:30.218 "adrfam": "ipv4", 00:26:30.218 "trsvcid": "4420", 00:26:30.218 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:30.218 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:30.218 "hdgst": false, 00:26:30.218 "ddgst": false 00:26:30.218 }, 00:26:30.218 "method": "bdev_nvme_attach_controller" 00:26:30.218 },{ 00:26:30.218 "params": { 00:26:30.218 "name": "Nvme4", 00:26:30.218 "trtype": "tcp", 00:26:30.218 "traddr": "10.0.0.2", 00:26:30.218 "adrfam": "ipv4", 00:26:30.218 "trsvcid": "4420", 00:26:30.218 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:30.218 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:30.218 "hdgst": false, 00:26:30.218 "ddgst": false 00:26:30.218 }, 00:26:30.218 "method": "bdev_nvme_attach_controller" 00:26:30.218 },{ 00:26:30.218 "params": { 
00:26:30.218 "name": "Nvme5", 00:26:30.218 "trtype": "tcp", 00:26:30.218 "traddr": "10.0.0.2", 00:26:30.218 "adrfam": "ipv4", 00:26:30.218 "trsvcid": "4420", 00:26:30.218 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:30.218 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:30.218 "hdgst": false, 00:26:30.218 "ddgst": false 00:26:30.218 }, 00:26:30.218 "method": "bdev_nvme_attach_controller" 00:26:30.218 },{ 00:26:30.218 "params": { 00:26:30.218 "name": "Nvme6", 00:26:30.218 "trtype": "tcp", 00:26:30.218 "traddr": "10.0.0.2", 00:26:30.218 "adrfam": "ipv4", 00:26:30.218 "trsvcid": "4420", 00:26:30.218 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:30.218 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:30.218 "hdgst": false, 00:26:30.218 "ddgst": false 00:26:30.218 }, 00:26:30.218 "method": "bdev_nvme_attach_controller" 00:26:30.218 },{ 00:26:30.218 "params": { 00:26:30.218 "name": "Nvme7", 00:26:30.218 "trtype": "tcp", 00:26:30.218 "traddr": "10.0.0.2", 00:26:30.218 "adrfam": "ipv4", 00:26:30.218 "trsvcid": "4420", 00:26:30.218 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:30.218 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:30.218 "hdgst": false, 00:26:30.218 "ddgst": false 00:26:30.218 }, 00:26:30.218 "method": "bdev_nvme_attach_controller" 00:26:30.218 },{ 00:26:30.218 "params": { 00:26:30.218 "name": "Nvme8", 00:26:30.218 "trtype": "tcp", 00:26:30.218 "traddr": "10.0.0.2", 00:26:30.218 "adrfam": "ipv4", 00:26:30.218 "trsvcid": "4420", 00:26:30.218 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:30.218 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:30.218 "hdgst": false, 00:26:30.218 "ddgst": false 00:26:30.218 }, 00:26:30.218 "method": "bdev_nvme_attach_controller" 00:26:30.218 },{ 00:26:30.218 "params": { 00:26:30.218 "name": "Nvme9", 00:26:30.218 "trtype": "tcp", 00:26:30.218 "traddr": "10.0.0.2", 00:26:30.218 "adrfam": "ipv4", 00:26:30.218 "trsvcid": "4420", 00:26:30.218 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:30.218 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:26:30.218 "hdgst": false, 00:26:30.218 "ddgst": false 00:26:30.218 }, 00:26:30.218 "method": "bdev_nvme_attach_controller" 00:26:30.218 },{ 00:26:30.218 "params": { 00:26:30.218 "name": "Nvme10", 00:26:30.218 "trtype": "tcp", 00:26:30.218 "traddr": "10.0.0.2", 00:26:30.218 "adrfam": "ipv4", 00:26:30.218 "trsvcid": "4420", 00:26:30.218 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:30.218 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:30.218 "hdgst": false, 00:26:30.218 "ddgst": false 00:26:30.218 }, 00:26:30.218 "method": "bdev_nvme_attach_controller" 00:26:30.218 }' 00:26:30.218 [2024-07-26 08:59:48.590254] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:30.218 [2024-07-26 08:59:48.590334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045551 ] 00:26:30.218 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.218 [2024-07-26 08:59:48.628536] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:30.218 [2024-07-26 08:59:48.658116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.477 [2024-07-26 08:59:48.745547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.856 Running I/O for 1 seconds... 
00:26:33.231 00:26:33.231 Latency(us) 00:26:33.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.231 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:33.231 Verification LBA range: start 0x0 length 0x400 00:26:33.231 Nvme1n1 : 1.13 229.72 14.36 0.00 0.00 274320.25 7039.05 250104.79 00:26:33.231 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:33.231 Verification LBA range: start 0x0 length 0x400 00:26:33.231 Nvme2n1 : 1.13 226.01 14.13 0.00 0.00 275236.03 22427.88 254765.13 00:26:33.231 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:33.231 Verification LBA range: start 0x0 length 0x400 00:26:33.231 Nvme3n1 : 1.05 183.63 11.48 0.00 0.00 332782.81 27962.03 268746.15 00:26:33.231 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:33.231 Verification LBA range: start 0x0 length 0x400 00:26:33.231 Nvme4n1 : 1.18 277.95 17.37 0.00 0.00 216258.07 4684.61 253211.69 00:26:33.231 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:33.231 Verification LBA range: start 0x0 length 0x400 00:26:33.231 Nvme5n1 : 1.17 219.71 13.73 0.00 0.00 270077.91 20971.52 259425.47 00:26:33.231 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:33.231 Verification LBA range: start 0x0 length 0x400 00:26:33.231 Nvme6n1 : 1.19 269.88 16.87 0.00 0.00 215851.01 19223.89 231463.44 00:26:33.231 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:33.231 Verification LBA range: start 0x0 length 0x400 00:26:33.231 Nvme7n1 : 1.19 215.01 13.44 0.00 0.00 267566.46 21651.15 292047.83 00:26:33.231 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:33.231 Verification LBA range: start 0x0 length 0x400 00:26:33.231 Nvme8n1 : 1.19 268.16 16.76 0.00 0.00 210899.25 18155.90 257872.02 00:26:33.231 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:26:33.231 Verification LBA range: start 0x0 length 0x400 00:26:33.231 Nvme9n1 : 1.18 217.19 13.57 0.00 0.00 255839.95 21651.15 276513.37 00:26:33.231 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:33.231 Verification LBA range: start 0x0 length 0x400 00:26:33.231 Nvme10n1 : 1.20 266.28 16.64 0.00 0.00 205683.52 17087.91 231463.44 00:26:33.231 =================================================================================================================== 00:26:33.231 Total : 2373.54 148.35 0.00 0.00 246787.78 4684.61 292047.83 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:33.489 
08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:33.489 rmmod nvme_tcp 00:26:33.489 rmmod nvme_fabrics 00:26:33.489 rmmod nvme_keyring 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1045020 ']' 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1045020 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1045020 ']' 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1045020 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1045020 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1045020' 00:26:33.489 killing process 
with pid 1045020 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1045020 00:26:33.489 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1045020 00:26:34.057 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:34.057 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:34.057 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:34.057 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:34.057 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:34.057 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.057 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.057 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:35.961 00:26:35.961 real 0m11.907s 00:26:35.961 user 0m34.431s 00:26:35.961 sys 0m3.294s 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:35.961 ************************************ 00:26:35.961 END TEST nvmf_shutdown_tc1 00:26:35.961 ************************************ 
00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:35.961 ************************************ 00:26:35.961 START TEST nvmf_shutdown_tc2 00:26:35.961 ************************************ 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.961 08:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:35.961 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:35.961 08:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:35.961 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:35.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:35.961 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:35.962 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.962 
08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.962 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:36.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:36.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:26:36.220 00:26:36.220 --- 10.0.0.2 ping statistics --- 00:26:36.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.220 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:26:36.220 00:26:36.220 --- 10.0.0.1 ping statistics --- 00:26:36.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.220 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:36.220 08:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1046379 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1046379 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1046379 ']' 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.220 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:36.220 [2024-07-26 08:59:54.594585] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:36.220 [2024-07-26 08:59:54.594668] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.220 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.220 [2024-07-26 08:59:54.632458] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:36.220 [2024-07-26 08:59:54.664338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:36.478 [2024-07-26 08:59:54.758870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.478 [2024-07-26 08:59:54.758919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.478 [2024-07-26 08:59:54.758936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.478 [2024-07-26 08:59:54.758949] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.478 [2024-07-26 08:59:54.758961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:36.478 [2024-07-26 08:59:54.759057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.478 [2024-07-26 08:59:54.759132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.478 [2024-07-26 08:59:54.759183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:36.478 [2024-07-26 08:59:54.759185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.478 [2024-07-26 08:59:54.919468] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.478 08:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.478 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:36.737 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.737 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:36.737 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.737 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:36.737 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.737 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:26:36.737 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.738 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.738 Malloc1 00:26:36.738 [2024-07-26 08:59:55.004931] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.738 Malloc2 00:26:36.738 Malloc3 00:26:36.738 Malloc4 00:26:36.738 Malloc5 00:26:36.996 Malloc6 00:26:36.996 Malloc7 00:26:36.996 Malloc8 00:26:36.996 Malloc9 
00:26:36.996 Malloc10 00:26:36.996 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.996 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:36.996 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.996 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1046545 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1046545 /var/tmp/bdevperf.sock 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1046545 ']' 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:26:37.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.255 { 00:26:37.255 "params": { 00:26:37.255 "name": "Nvme$subsystem", 00:26:37.255 "trtype": "$TEST_TRANSPORT", 00:26:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.255 "adrfam": "ipv4", 00:26:37.255 "trsvcid": "$NVMF_PORT", 00:26:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.255 "hdgst": ${hdgst:-false}, 00:26:37.255 "ddgst": ${ddgst:-false} 00:26:37.255 }, 00:26:37.255 "method": "bdev_nvme_attach_controller" 00:26:37.255 } 00:26:37.255 EOF 00:26:37.255 )") 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.255 { 00:26:37.255 "params": { 00:26:37.255 "name": "Nvme$subsystem", 00:26:37.255 "trtype": "$TEST_TRANSPORT", 00:26:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.255 "adrfam": "ipv4", 00:26:37.255 "trsvcid": "$NVMF_PORT", 00:26:37.255 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.255 "hdgst": ${hdgst:-false}, 00:26:37.255 "ddgst": ${ddgst:-false} 00:26:37.255 }, 00:26:37.255 "method": "bdev_nvme_attach_controller" 00:26:37.255 } 00:26:37.255 EOF 00:26:37.255 )") 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.255 { 00:26:37.255 "params": { 00:26:37.255 "name": "Nvme$subsystem", 00:26:37.255 "trtype": "$TEST_TRANSPORT", 00:26:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.255 "adrfam": "ipv4", 00:26:37.255 "trsvcid": "$NVMF_PORT", 00:26:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.255 "hdgst": ${hdgst:-false}, 00:26:37.255 "ddgst": ${ddgst:-false} 00:26:37.255 }, 00:26:37.255 "method": "bdev_nvme_attach_controller" 00:26:37.255 } 00:26:37.255 EOF 00:26:37.255 )") 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.255 { 00:26:37.255 "params": { 00:26:37.255 "name": "Nvme$subsystem", 00:26:37.255 "trtype": "$TEST_TRANSPORT", 00:26:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.255 "adrfam": "ipv4", 00:26:37.255 "trsvcid": "$NVMF_PORT", 00:26:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.255 "hdgst": 
${hdgst:-false}, 00:26:37.255 "ddgst": ${ddgst:-false} 00:26:37.255 }, 00:26:37.255 "method": "bdev_nvme_attach_controller" 00:26:37.255 } 00:26:37.255 EOF 00:26:37.255 )") 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.255 { 00:26:37.255 "params": { 00:26:37.255 "name": "Nvme$subsystem", 00:26:37.255 "trtype": "$TEST_TRANSPORT", 00:26:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.255 "adrfam": "ipv4", 00:26:37.255 "trsvcid": "$NVMF_PORT", 00:26:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.255 "hdgst": ${hdgst:-false}, 00:26:37.255 "ddgst": ${ddgst:-false} 00:26:37.255 }, 00:26:37.255 "method": "bdev_nvme_attach_controller" 00:26:37.255 } 00:26:37.255 EOF 00:26:37.255 )") 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.255 { 00:26:37.255 "params": { 00:26:37.255 "name": "Nvme$subsystem", 00:26:37.255 "trtype": "$TEST_TRANSPORT", 00:26:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.255 "adrfam": "ipv4", 00:26:37.255 "trsvcid": "$NVMF_PORT", 00:26:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.255 "hdgst": ${hdgst:-false}, 00:26:37.255 "ddgst": ${ddgst:-false} 00:26:37.255 }, 00:26:37.255 "method": "bdev_nvme_attach_controller" 
00:26:37.255 } 00:26:37.255 EOF 00:26:37.255 )") 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.255 { 00:26:37.255 "params": { 00:26:37.255 "name": "Nvme$subsystem", 00:26:37.255 "trtype": "$TEST_TRANSPORT", 00:26:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.255 "adrfam": "ipv4", 00:26:37.255 "trsvcid": "$NVMF_PORT", 00:26:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.255 "hdgst": ${hdgst:-false}, 00:26:37.255 "ddgst": ${ddgst:-false} 00:26:37.255 }, 00:26:37.255 "method": "bdev_nvme_attach_controller" 00:26:37.255 } 00:26:37.255 EOF 00:26:37.255 )") 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.255 { 00:26:37.255 "params": { 00:26:37.255 "name": "Nvme$subsystem", 00:26:37.255 "trtype": "$TEST_TRANSPORT", 00:26:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.255 "adrfam": "ipv4", 00:26:37.255 "trsvcid": "$NVMF_PORT", 00:26:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.255 "hdgst": ${hdgst:-false}, 00:26:37.255 "ddgst": ${ddgst:-false} 00:26:37.255 }, 00:26:37.255 "method": "bdev_nvme_attach_controller" 00:26:37.255 } 00:26:37.255 EOF 00:26:37.255 )") 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@554 -- # cat 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.255 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.255 { 00:26:37.255 "params": { 00:26:37.255 "name": "Nvme$subsystem", 00:26:37.255 "trtype": "$TEST_TRANSPORT", 00:26:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.255 "adrfam": "ipv4", 00:26:37.255 "trsvcid": "$NVMF_PORT", 00:26:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.256 "hdgst": ${hdgst:-false}, 00:26:37.256 "ddgst": ${ddgst:-false} 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 } 00:26:37.256 EOF 00:26:37.256 )") 00:26:37.256 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:37.256 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.256 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.256 { 00:26:37.256 "params": { 00:26:37.256 "name": "Nvme$subsystem", 00:26:37.256 "trtype": "$TEST_TRANSPORT", 00:26:37.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "$NVMF_PORT", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.256 "hdgst": ${hdgst:-false}, 00:26:37.256 "ddgst": ${ddgst:-false} 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 } 00:26:37.256 EOF 00:26:37.256 )") 00:26:37.256 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:37.256 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@556 -- # jq . 00:26:37.256 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:26:37.256 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:37.256 "params": { 00:26:37.256 "name": "Nvme1", 00:26:37.256 "trtype": "tcp", 00:26:37.256 "traddr": "10.0.0.2", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "4420", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:37.256 "hdgst": false, 00:26:37.256 "ddgst": false 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 },{ 00:26:37.256 "params": { 00:26:37.256 "name": "Nvme2", 00:26:37.256 "trtype": "tcp", 00:26:37.256 "traddr": "10.0.0.2", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "4420", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:37.256 "hdgst": false, 00:26:37.256 "ddgst": false 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 },{ 00:26:37.256 "params": { 00:26:37.256 "name": "Nvme3", 00:26:37.256 "trtype": "tcp", 00:26:37.256 "traddr": "10.0.0.2", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "4420", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:37.256 "hdgst": false, 00:26:37.256 "ddgst": false 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 },{ 00:26:37.256 "params": { 00:26:37.256 "name": "Nvme4", 00:26:37.256 "trtype": "tcp", 00:26:37.256 "traddr": "10.0.0.2", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "4420", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:37.256 "hdgst": false, 00:26:37.256 "ddgst": false 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 },{ 
00:26:37.256 "params": { 00:26:37.256 "name": "Nvme5", 00:26:37.256 "trtype": "tcp", 00:26:37.256 "traddr": "10.0.0.2", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "4420", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:37.256 "hdgst": false, 00:26:37.256 "ddgst": false 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 },{ 00:26:37.256 "params": { 00:26:37.256 "name": "Nvme6", 00:26:37.256 "trtype": "tcp", 00:26:37.256 "traddr": "10.0.0.2", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "4420", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:37.256 "hdgst": false, 00:26:37.256 "ddgst": false 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 },{ 00:26:37.256 "params": { 00:26:37.256 "name": "Nvme7", 00:26:37.256 "trtype": "tcp", 00:26:37.256 "traddr": "10.0.0.2", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "4420", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:37.256 "hdgst": false, 00:26:37.256 "ddgst": false 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 },{ 00:26:37.256 "params": { 00:26:37.256 "name": "Nvme8", 00:26:37.256 "trtype": "tcp", 00:26:37.256 "traddr": "10.0.0.2", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "4420", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:37.256 "hdgst": false, 00:26:37.256 "ddgst": false 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 },{ 00:26:37.256 "params": { 00:26:37.256 "name": "Nvme9", 00:26:37.256 "trtype": "tcp", 00:26:37.256 "traddr": "10.0.0.2", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "4420", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:37.256 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:26:37.256 "hdgst": false, 00:26:37.256 "ddgst": false 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 },{ 00:26:37.256 "params": { 00:26:37.256 "name": "Nvme10", 00:26:37.256 "trtype": "tcp", 00:26:37.256 "traddr": "10.0.0.2", 00:26:37.256 "adrfam": "ipv4", 00:26:37.256 "trsvcid": "4420", 00:26:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:37.256 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:37.256 "hdgst": false, 00:26:37.256 "ddgst": false 00:26:37.256 }, 00:26:37.256 "method": "bdev_nvme_attach_controller" 00:26:37.256 }' 00:26:37.256 [2024-07-26 08:59:55.516310] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:37.256 [2024-07-26 08:59:55.516414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046545 ] 00:26:37.256 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.256 [2024-07-26 08:59:55.551801] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:37.256 [2024-07-26 08:59:55.580923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.256 [2024-07-26 08:59:55.667728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.206 Running I/O for 10 seconds... 
00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:39.206 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:39.464 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:39.464 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:39.464 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:39.464 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:39.464 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.464 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.464 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.464 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:39.464 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:39.464 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:39.722 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:39.722 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:39.722 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:39.722 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:39.722 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.722 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.722 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1046545 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1046545 
']' 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1046545 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1046545 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1046545' 00:26:39.982 killing process with pid 1046545 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1046545 00:26:39.982 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1046545 00:26:39.982 Received shutdown signal, test time was about 0.975982 seconds 00:26:39.982 00:26:39.982 Latency(us) 00:26:39.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.982 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.982 Verification LBA range: start 0x0 length 0x400 00:26:39.982 Nvme1n1 : 0.95 201.33 12.58 0.00 0.00 314279.25 23398.78 296708.17 00:26:39.982 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.982 Verification LBA range: start 0x0 length 0x400 00:26:39.982 Nvme2n1 : 0.94 205.31 12.83 0.00 0.00 301746.25 23107.51 299815.06 00:26:39.982 Job: Nvme3n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.982 Verification LBA range: start 0x0 length 0x400 00:26:39.983 Nvme3n1 : 0.95 202.61 12.66 0.00 0.00 300046.16 24660.95 293601.28 00:26:39.983 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.983 Verification LBA range: start 0x0 length 0x400 00:26:39.983 Nvme4n1 : 0.94 204.28 12.77 0.00 0.00 291219.28 21554.06 299815.06 00:26:39.983 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.983 Verification LBA range: start 0x0 length 0x400 00:26:39.983 Nvme5n1 : 0.92 207.94 13.00 0.00 0.00 279080.45 20000.62 302921.96 00:26:39.983 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.983 Verification LBA range: start 0x0 length 0x400 00:26:39.983 Nvme6n1 : 0.96 203.27 12.70 0.00 0.00 279695.44 5485.61 304475.40 00:26:39.983 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.983 Verification LBA range: start 0x0 length 0x400 00:26:39.983 Nvme7n1 : 0.97 263.64 16.48 0.00 0.00 211161.13 22913.33 276513.37 00:26:39.983 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.983 Verification LBA range: start 0x0 length 0x400 00:26:39.983 Nvme8n1 : 0.96 199.38 12.46 0.00 0.00 274976.36 21165.70 299815.06 00:26:39.983 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.983 Verification LBA range: start 0x0 length 0x400 00:26:39.983 Nvme9n1 : 0.97 203.83 12.74 0.00 0.00 261983.64 5679.79 298261.62 00:26:39.983 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.983 Verification LBA range: start 0x0 length 0x400 00:26:39.983 Nvme10n1 : 0.98 196.89 12.31 0.00 0.00 267617.72 23107.51 333990.87 00:26:39.983 =================================================================================================================== 00:26:39.983 Total : 2088.48 130.53 0.00 0.00 275990.82 5485.61 333990.87 00:26:40.243 
08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1046379 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:41.180 rmmod nvme_tcp 00:26:41.180 rmmod nvme_fabrics 00:26:41.180 rmmod nvme_keyring 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:41.180 
08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1046379 ']' 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1046379 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1046379 ']' 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1046379 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:41.180 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1046379 00:26:41.438 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:41.438 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:41.438 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1046379' 00:26:41.438 killing process with pid 1046379 00:26:41.438 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1046379 00:26:41.438 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1046379 00:26:41.696 09:00:00 
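`killprocess`, invoked here for both the bdevperf pid (1046545) and the nvmf target pid (1046379), follows the steps visible in the trace: check the pid is set and alive with `kill -0`, inspect the process name on Linux via `ps --no-headers -o comm=` (refusing to kill the `sudo` wrapper itself), then signal and reap it. A simplified sketch, assuming the behavior implied by the traced lines of `common/autotest_common.sh` rather than its exact implementation:

```shell
# Simplified sketch of the killprocess helper as traced above: validate
# the pid, refuse to kill a bare sudo wrapper, then SIGTERM and reap.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # '[' -z "$pid" ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1    # kill -0: is the process still alive?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                   # reap so the pid cannot linger as a zombie
}
```

In the trace the `comm` values are `reactor_0` (bdevperf) and `reactor_1` (nvmf_tgt), so both pass the sudo check and are killed directly.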
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:41.696 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:41.696 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:41.696 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.696 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:41.696 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.696 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.696 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.235 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:44.235 00:26:44.235 real 0m7.815s 00:26:44.235 user 0m23.813s 00:26:44.235 sys 0m1.493s 00:26:44.235 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:44.235 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:44.235 ************************************ 00:26:44.235 END TEST nvmf_shutdown_tc2 00:26:44.235 ************************************ 00:26:44.235 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:44.235 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:44.235 09:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:44.235 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:44.235 ************************************ 00:26:44.235 START TEST nvmf_shutdown_tc3 00:26:44.235 ************************************ 00:26:44.235 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:26:44.235 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:26:44.235 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:44.236 09:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 
00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:44.236 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:44.236 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:44.236 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:44.236 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:44.236 09:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:44.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:26:44.236 00:26:44.236 --- 10.0.0.2 ping statistics --- 00:26:44.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.236 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:26:44.236 00:26:44.236 --- 10.0.0.1 ping statistics --- 00:26:44.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.236 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.236 
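The `nvmf_tcp_init` sequence traced above builds the two-port test topology: the first ice port (`cvl_0_0`) is moved into a network namespace and addressed as the target (10.0.0.2), while the second port (`cvl_0_1`) stays in the root namespace as the initiator (10.0.0.1), with an iptables rule admitting NVMe/TCP traffic on port 4420 and a ping in each direction as a sanity check. The commands below are collected directly from the trace into one setup fragment (requires root and the two NICs, so it is illustrative rather than runnable here):

```shell
# Setup fragment reassembled from the nvmf_tcp_init trace above
# (nvmf/common.sh). Requires root and the cvl_0_0/cvl_0_1 ports.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, root namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP to the initiator side
ping -c 1 10.0.0.2                                            # root ns -> namespaced target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # target ns -> initiator
```

This is why `nvmf_tgt` is then launched under `ip netns exec cvl_0_0_ns_spdk`: the target process must live in the namespace that owns `cvl_0_0`/10.0.0.2.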
09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1047592 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1047592 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1047592 ']' 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.236 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:44.237 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.237 [2024-07-26 09:00:02.460927] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:26:44.237 [2024-07-26 09:00:02.461011] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.237 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.237 [2024-07-26 09:00:02.500559] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:44.237 [2024-07-26 09:00:02.532553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.237 [2024-07-26 09:00:02.623685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.237 [2024-07-26 09:00:02.623748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.237 [2024-07-26 09:00:02.623764] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.237 [2024-07-26 09:00:02.623778] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.237 [2024-07-26 09:00:02.623791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:44.237 [2024-07-26 09:00:02.623896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.237 [2024-07-26 09:00:02.623914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.237 [2024-07-26 09:00:02.623987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:44.237 [2024-07-26 09:00:02.623990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.494 [2024-07-26 09:00:02.782547] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.494 09:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.494 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:44.494 Malloc1 00:26:44.494 [2024-07-26 09:00:02.870036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.494 Malloc2 00:26:44.494 Malloc3 00:26:44.752 Malloc4 00:26:44.752 Malloc5 00:26:44.752 Malloc6 00:26:44.752 Malloc7 00:26:44.752 Malloc8 00:26:45.024 Malloc9 
00:26:45.024 Malloc10 00:26:45.024 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.024 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:45.024 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:45.024 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:45.024 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1047658 00:26:45.024 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1047658 /var/tmp/bdevperf.sock 00:26:45.024 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1047658 ']' 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:45.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.025 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.025 { 00:26:45.025 "params": { 00:26:45.025 "name": "Nvme$subsystem", 00:26:45.025 "trtype": "$TEST_TRANSPORT", 00:26:45.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.025 "adrfam": "ipv4", 00:26:45.025 "trsvcid": "$NVMF_PORT", 00:26:45.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.025 "hdgst": ${hdgst:-false}, 00:26:45.025 "ddgst": ${ddgst:-false} 00:26:45.025 }, 00:26:45.025 "method": "bdev_nvme_attach_controller" 00:26:45.026 } 00:26:45.026 EOF 00:26:45.026 )") 00:26:45.026 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:45.026 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.026 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.026 { 00:26:45.026 "params": { 00:26:45.026 "name": "Nvme$subsystem", 00:26:45.026 "trtype": "$TEST_TRANSPORT", 00:26:45.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.026 
"adrfam": "ipv4", 00:26:45.026 "trsvcid": "$NVMF_PORT", 00:26:45.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.026 "hdgst": ${hdgst:-false}, 00:26:45.026 "ddgst": ${ddgst:-false} 00:26:45.026 }, 00:26:45.026 "method": "bdev_nvme_attach_controller" 00:26:45.026 } 00:26:45.026 EOF 00:26:45.026 )") 00:26:45.026 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:45.026 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.026 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.026 { 00:26:45.026 "params": { 00:26:45.026 "name": "Nvme$subsystem", 00:26:45.026 "trtype": "$TEST_TRANSPORT", 00:26:45.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.026 "adrfam": "ipv4", 00:26:45.026 "trsvcid": "$NVMF_PORT", 00:26:45.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.026 "hdgst": ${hdgst:-false}, 00:26:45.026 "ddgst": ${ddgst:-false} 00:26:45.026 }, 00:26:45.026 "method": "bdev_nvme_attach_controller" 00:26:45.026 } 00:26:45.026 EOF 00:26:45.026 )") 00:26:45.026 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:45.026 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.026 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.026 { 00:26:45.026 "params": { 00:26:45.026 "name": "Nvme$subsystem", 00:26:45.026 "trtype": "$TEST_TRANSPORT", 00:26:45.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.026 "adrfam": "ipv4", 00:26:45.027 "trsvcid": "$NVMF_PORT", 00:26:45.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:26:45.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.027 "hdgst": ${hdgst:-false}, 00:26:45.027 "ddgst": ${ddgst:-false} 00:26:45.027 }, 00:26:45.027 "method": "bdev_nvme_attach_controller" 00:26:45.027 } 00:26:45.027 EOF 00:26:45.027 )") 00:26:45.027 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:45.027 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.027 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.027 { 00:26:45.027 "params": { 00:26:45.027 "name": "Nvme$subsystem", 00:26:45.027 "trtype": "$TEST_TRANSPORT", 00:26:45.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.027 "adrfam": "ipv4", 00:26:45.027 "trsvcid": "$NVMF_PORT", 00:26:45.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.027 "hdgst": ${hdgst:-false}, 00:26:45.027 "ddgst": ${ddgst:-false} 00:26:45.027 }, 00:26:45.027 "method": "bdev_nvme_attach_controller" 00:26:45.027 } 00:26:45.027 EOF 00:26:45.027 )") 00:26:45.027 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:45.027 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.027 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.027 { 00:26:45.027 "params": { 00:26:45.027 "name": "Nvme$subsystem", 00:26:45.027 "trtype": "$TEST_TRANSPORT", 00:26:45.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.027 "adrfam": "ipv4", 00:26:45.027 "trsvcid": "$NVMF_PORT", 00:26:45.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.027 "hdgst": ${hdgst:-false}, 00:26:45.027 "ddgst": 
${ddgst:-false} 00:26:45.027 }, 00:26:45.027 "method": "bdev_nvme_attach_controller" 00:26:45.027 } 00:26:45.027 EOF 00:26:45.027 )") 00:26:45.028 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:45.028 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.028 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.028 { 00:26:45.028 "params": { 00:26:45.028 "name": "Nvme$subsystem", 00:26:45.028 "trtype": "$TEST_TRANSPORT", 00:26:45.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.028 "adrfam": "ipv4", 00:26:45.028 "trsvcid": "$NVMF_PORT", 00:26:45.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.028 "hdgst": ${hdgst:-false}, 00:26:45.028 "ddgst": ${ddgst:-false} 00:26:45.028 }, 00:26:45.028 "method": "bdev_nvme_attach_controller" 00:26:45.028 } 00:26:45.028 EOF 00:26:45.028 )") 00:26:45.028 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:45.028 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.028 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.028 { 00:26:45.028 "params": { 00:26:45.028 "name": "Nvme$subsystem", 00:26:45.028 "trtype": "$TEST_TRANSPORT", 00:26:45.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.028 "adrfam": "ipv4", 00:26:45.028 "trsvcid": "$NVMF_PORT", 00:26:45.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.028 "hdgst": ${hdgst:-false}, 00:26:45.028 "ddgst": ${ddgst:-false} 00:26:45.028 }, 00:26:45.028 "method": "bdev_nvme_attach_controller" 00:26:45.028 } 00:26:45.028 EOF 00:26:45.028 
)") 00:26:45.028 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:45.028 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.028 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.028 { 00:26:45.028 "params": { 00:26:45.028 "name": "Nvme$subsystem", 00:26:45.028 "trtype": "$TEST_TRANSPORT", 00:26:45.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.029 "adrfam": "ipv4", 00:26:45.029 "trsvcid": "$NVMF_PORT", 00:26:45.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.029 "hdgst": ${hdgst:-false}, 00:26:45.029 "ddgst": ${ddgst:-false} 00:26:45.029 }, 00:26:45.029 "method": "bdev_nvme_attach_controller" 00:26:45.029 } 00:26:45.029 EOF 00:26:45.029 )") 00:26:45.029 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:45.029 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.029 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.029 { 00:26:45.029 "params": { 00:26:45.029 "name": "Nvme$subsystem", 00:26:45.029 "trtype": "$TEST_TRANSPORT", 00:26:45.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.029 "adrfam": "ipv4", 00:26:45.029 "trsvcid": "$NVMF_PORT", 00:26:45.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.029 "hdgst": ${hdgst:-false}, 00:26:45.029 "ddgst": ${ddgst:-false} 00:26:45.029 }, 00:26:45.029 "method": "bdev_nvme_attach_controller" 00:26:45.029 } 00:26:45.029 EOF 00:26:45.029 )") 00:26:45.029 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:45.029 
09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:26:45.029 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:26:45.029 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:45.029 "params": { 00:26:45.029 "name": "Nvme1", 00:26:45.029 "trtype": "tcp", 00:26:45.029 "traddr": "10.0.0.2", 00:26:45.029 "adrfam": "ipv4", 00:26:45.029 "trsvcid": "4420", 00:26:45.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:45.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:45.029 "hdgst": false, 00:26:45.029 "ddgst": false 00:26:45.029 }, 00:26:45.029 "method": "bdev_nvme_attach_controller" 00:26:45.029 },{ 00:26:45.029 "params": { 00:26:45.029 "name": "Nvme2", 00:26:45.030 "trtype": "tcp", 00:26:45.030 "traddr": "10.0.0.2", 00:26:45.030 "adrfam": "ipv4", 00:26:45.030 "trsvcid": "4420", 00:26:45.030 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:45.030 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:45.030 "hdgst": false, 00:26:45.030 "ddgst": false 00:26:45.030 }, 00:26:45.030 "method": "bdev_nvme_attach_controller" 00:26:45.030 },{ 00:26:45.030 "params": { 00:26:45.030 "name": "Nvme3", 00:26:45.030 "trtype": "tcp", 00:26:45.030 "traddr": "10.0.0.2", 00:26:45.030 "adrfam": "ipv4", 00:26:45.030 "trsvcid": "4420", 00:26:45.030 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:45.030 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:45.030 "hdgst": false, 00:26:45.030 "ddgst": false 00:26:45.030 }, 00:26:45.030 "method": "bdev_nvme_attach_controller" 00:26:45.030 },{ 00:26:45.030 "params": { 00:26:45.030 "name": "Nvme4", 00:26:45.030 "trtype": "tcp", 00:26:45.030 "traddr": "10.0.0.2", 00:26:45.030 "adrfam": "ipv4", 00:26:45.030 "trsvcid": "4420", 00:26:45.030 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:45.030 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:45.030 "hdgst": false, 00:26:45.030 "ddgst": false 00:26:45.030 }, 
00:26:45.030 "method": "bdev_nvme_attach_controller" 00:26:45.030 },{ 00:26:45.030 "params": { 00:26:45.030 "name": "Nvme5", 00:26:45.030 "trtype": "tcp", 00:26:45.030 "traddr": "10.0.0.2", 00:26:45.030 "adrfam": "ipv4", 00:26:45.030 "trsvcid": "4420", 00:26:45.030 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:45.030 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:45.030 "hdgst": false, 00:26:45.030 "ddgst": false 00:26:45.030 }, 00:26:45.030 "method": "bdev_nvme_attach_controller" 00:26:45.030 },{ 00:26:45.030 "params": { 00:26:45.030 "name": "Nvme6", 00:26:45.030 "trtype": "tcp", 00:26:45.030 "traddr": "10.0.0.2", 00:26:45.030 "adrfam": "ipv4", 00:26:45.030 "trsvcid": "4420", 00:26:45.030 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:45.030 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:45.030 "hdgst": false, 00:26:45.030 "ddgst": false 00:26:45.030 }, 00:26:45.030 "method": "bdev_nvme_attach_controller" 00:26:45.031 },{ 00:26:45.031 "params": { 00:26:45.031 "name": "Nvme7", 00:26:45.031 "trtype": "tcp", 00:26:45.031 "traddr": "10.0.0.2", 00:26:45.031 "adrfam": "ipv4", 00:26:45.031 "trsvcid": "4420", 00:26:45.031 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:45.031 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:45.031 "hdgst": false, 00:26:45.031 "ddgst": false 00:26:45.031 }, 00:26:45.031 "method": "bdev_nvme_attach_controller" 00:26:45.031 },{ 00:26:45.031 "params": { 00:26:45.031 "name": "Nvme8", 00:26:45.031 "trtype": "tcp", 00:26:45.031 "traddr": "10.0.0.2", 00:26:45.031 "adrfam": "ipv4", 00:26:45.031 "trsvcid": "4420", 00:26:45.031 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:45.031 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:45.031 "hdgst": false, 00:26:45.031 "ddgst": false 00:26:45.031 }, 00:26:45.031 "method": "bdev_nvme_attach_controller" 00:26:45.031 },{ 00:26:45.031 "params": { 00:26:45.031 "name": "Nvme9", 00:26:45.031 "trtype": "tcp", 00:26:45.031 "traddr": "10.0.0.2", 00:26:45.031 "adrfam": "ipv4", 00:26:45.031 "trsvcid": "4420", 00:26:45.031 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:45.031 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:45.031 "hdgst": false, 00:26:45.031 "ddgst": false 00:26:45.031 }, 00:26:45.031 "method": "bdev_nvme_attach_controller" 00:26:45.031 },{ 00:26:45.031 "params": { 00:26:45.031 "name": "Nvme10", 00:26:45.031 "trtype": "tcp", 00:26:45.031 "traddr": "10.0.0.2", 00:26:45.031 "adrfam": "ipv4", 00:26:45.031 "trsvcid": "4420", 00:26:45.031 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:45.031 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:45.031 "hdgst": false, 00:26:45.031 "ddgst": false 00:26:45.031 }, 00:26:45.031 "method": "bdev_nvme_attach_controller" 00:26:45.031 }' 00:26:45.032 [2024-07-26 09:00:03.373952] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:45.032 [2024-07-26 09:00:03.374032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047658 ] 00:26:45.032 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.032 [2024-07-26 09:00:03.410571] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:45.032 [2024-07-26 09:00:03.440340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.292 [2024-07-26 09:00:03.529015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.670 Running I/O for 10 seconds... 
00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:46.928 09:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.928 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:47.186 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.186 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=19 00:26:47.186 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 19 -ge 100 ']' 00:26:47.186 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:47.444 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:47.445 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:47.445 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:47.445 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:47.445 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.445 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:47.445 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:26:47.445 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=85 00:26:47.445 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 85 -ge 100 ']' 00:26:47.445 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=136 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:26:47.718 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:26:47.719 09:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1047592
00:26:47.719 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1047592 ']'
00:26:47.719 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1047592
00:26:47.719 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:26:47.719 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:47.719 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1047592
00:26:47.719 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:47.719 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:47.719 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1047592'
killing process with pid 1047592
00:26:47.719 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1047592
00:26:47.719 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1047592
00:26:47.719 [2024-07-26 09:00:06.033696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233daf0 is same with the state(5) to be set
[identical tcp.c:1653 message repeated for tqpair=0x233daf0 through 09:00:06.034171]
00:26:47.719 [2024-07-26 09:00:06.035527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340610 is same with the state(5) to be set
[identical tcp.c:1653 message repeated for tqpair=0x2340610 through 09:00:06.036412]
00:26:47.720 [2024-07-26 09:00:06.038136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233dfb0 is same with the state(5) to be set
[identical tcp.c:1653 message repeated for tqpair=0x233dfb0 through 09:00:06.038949]
00:26:47.720 [2024-07-26 09:00:06.040142] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.720 [2024-07-26 09:00:06.040195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[analogous command/completion pairs repeated through 09:00:06.041576: WRITE cid:62-63 (lba:32512, 32640) and READ cid:4-45 (lba:25088-30336, len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:47.722 [2024-07-26 09:00:06.041592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46
nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.722 [2024-07-26 09:00:06.041772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041931] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.041973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.041988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.042002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.042017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.042030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.042045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.042064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.042082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.042096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.042119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.042131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.042146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.722 [2024-07-26 09:00:06.042159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.042259] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23766a0 was disconnected and freed. reset controller. 00:26:47.722 [2024-07-26 09:00:06.044639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.722 [2024-07-26 09:00:06.044741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e9f10 (9): Bad file descriptor 00:26:47.722 [2024-07-26 09:00:06.044819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.722 [2024-07-26 09:00:06.044848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.044874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.722 [2024-07-26 09:00:06.044899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.044917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.722 [2024-07-26 09:00:06.044931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.044951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.722 [2024-07-26 09:00:06.044964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.722 [2024-07-26 09:00:06.044977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238e920 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.045646] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.722 [2024-07-26 09:00:06.046329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is 
same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.722 [2024-07-26 09:00:06.046619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.722 [2024-07-26 09:00:06.046633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21e9f10 with addr=10.0.0.2, port=4420 00:26:47.723 [2024-07-26 09:00:06.046646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e9f10 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 
09:00:06.046731] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.723 [2024-07-26 09:00:06.046740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.046986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is 
same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.723 [2024-07-26 09:00:06.047153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.047165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.047177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.047190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.047202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e470 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.047810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e9f10 (9): Bad file descriptor 00:26:47.724 [2024-07-26 09:00:06.048324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.724 
[2024-07-26 09:00:06.048347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.724 [2024-07-26 09:00:06.048377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.724 [2024-07-26 09:00:06.048743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.724 [2024-07-26 09:00:06.049243] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is 
same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049725] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 [2024-07-26 09:00:06.049750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be 
set 00:26:47.724 [2024-07-26 09:00:06.049762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233e950 is same with the state(5) to be set 00:26:47.724 (last message repeated at each timestamp from 09:00:06.049774 through 09:00:06.050156) 00:26:47.724 [2024-07-26 09:00:06.050249] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.724 [2024-07-26 09:00:06.051675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233ee10 is same with the state(5) to be set 00:26:47.725 (last message repeated at each timestamp from 09:00:06.051711 through 09:00:06.052465) 00:26:47.725 [2024-07-26 09:00:06.052471] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:47.725 [2024-07-26 09:00:06.052478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233ee10 is same with the state(5) to be set 00:26:47.725 (last message repeated at each timestamp from 09:00:06.052494 through 09:00:06.052555) 00:26:47.725 [2024-07-26 09:00:06.054586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233f7b0 is same with the state(5) to be set 00:26:47.726 (last message repeated at each timestamp from 09:00:06.054623 through 09:00:06.055439, interleaved with the entries below) 00:26:47.726 [2024-07-26 09:00:06.055095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e920 (9): Bad file descriptor 00:26:47.726 [2024-07-26 09:00:06.055176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.726 [2024-07-26 09:00:06.055205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.726 [2024-07-26 09:00:06.055222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.726 [2024-07-26 09:00:06.055237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.726 [2024-07-26 09:00:06.055251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.726 [2024-07-26 09:00:06.055265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.726 [2024-07-26 09:00:06.055279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.726 [2024-07-26 09:00:06.055292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.726 [2024-07-26 09:00:06.055306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5ad0 is same with the state(5) to be set 00:26:47.726 (the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequence for cid:0 through cid:3 repeats for tqpair=0x2216300, tqpair=0x2383cc0, tqpair=0x220cce0, tqpair=0x1cdf610 and tqpair=0x23a3070 between 09:00:06.055359 and 09:00:06.056181) 00:26:47.727 [2024-07-26 09:00:06.056530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233fc90 is same with the state(5) to be set 00:26:47.727 (last message repeated at 09:00:06.056557 and 09:00:06.056571) 00:26:47.727 [2024-07-26 09:00:06.056870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 (last message repeated at each timestamp from 09:00:06.056896 through 09:00:06.057361) 00:26:47.727 [2024-07-26 09:00:06.057377] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057536] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057624] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21e5c00 was disconnected and freed. reset controller. 
00:26:47.727 [2024-07-26 09:00:06.057635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340150 is same with the state(5) to be set 00:26:47.727 [2024-07-26 09:00:06.057847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.727 [2024-07-26 09:00:06.057870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.727 [2024-07-26 09:00:06.057896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.727 [2024-07-26 09:00:06.057912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.727 [2024-07-26 09:00:06.057929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.727 [2024-07-26 09:00:06.057944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.727 [2024-07-26 09:00:06.057960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.727 [2024-07-26 09:00:06.057973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.727 [2024-07-26 09:00:06.057989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.727 [2024-07-26 09:00:06.058003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 
09:00:06.058117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.728 [2024-07-26 09:00:06.058630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058787] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.058980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.058996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.728 [2024-07-26 09:00:06.059016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.728 [2024-07-26 09:00:06.059031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 
09:00:06.059324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059495] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.729 [2024-07-26 09:00:06.059821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.729 [2024-07-26 09:00:06.059835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e4770 is same with the state(5) to be set 00:26:47.729 [2024-07-26 
09:00:06.059913] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21e4770 was disconnected and freed. reset controller.
00:26:47.729 [2024-07-26 09:00:06.060313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.729 [2024-07-26 09:00:06.060357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:26:47.729 [2024-07-26 09:00:06.060381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf610 (9): Bad file descriptor
00:26:47.729 [2024-07-26 09:00:06.061718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:26:47.729 [2024-07-26 09:00:06.061751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a3070 (9): Bad file descriptor
00:26:47.729 [2024-07-26 09:00:06.061913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.729 [2024-07-26 09:00:06.061941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21e9f10 with addr=10.0.0.2, port=4420
00:26:47.729 [2024-07-26 09:00:06.061957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e9f10 is same with the state(5) to be set
00:26:47.729 [2024-07-26 09:00:06.062089] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:47.729 [2024-07-26 09:00:06.062394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.729 [2024-07-26 09:00:06.062421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf610 with addr=10.0.0.2, port=4420
00:26:47.729 [2024-07-26 09:00:06.062437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf610 is same with the state(5) to be set
00:26:47.729 [2024-07-26 09:00:06.062466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e9f10 (9): Bad file descriptor
00:26:47.729 [2024-07-26 09:00:06.062973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.729 [2024-07-26 09:00:06.063000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a3070 with addr=10.0.0.2, port=4420
00:26:47.729 [2024-07-26 09:00:06.063016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3070 is same with the state(5) to be set
00:26:47.729 [2024-07-26 09:00:06.063040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf610 (9): Bad file descriptor
00:26:47.729 [2024-07-26 09:00:06.063067] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.729 [2024-07-26 09:00:06.063082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.729 [2024-07-26 09:00:06.063108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.729 [2024-07-26 09:00:06.063198] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:47.729 [2024-07-26 09:00:06.063233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.729 [2024-07-26 09:00:06.063254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a3070 (9): Bad file descriptor
00:26:47.729 [2024-07-26 09:00:06.063272] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:26:47.729 [2024-07-26 09:00:06.063285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:26:47.729 [2024-07-26 09:00:06.063297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:26:47.729 [2024-07-26 09:00:06.063377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.729 [2024-07-26 09:00:06.063398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:26:47.729 [2024-07-26 09:00:06.063412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:26:47.729 [2024-07-26 09:00:06.063425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:26:47.730 [2024-07-26 09:00:06.063478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.730 [2024-07-26 09:00:06.065136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b5ad0 (9): Bad file descriptor
00:26:47.730 [2024-07-26 09:00:06.065173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2216300 (9): Bad file descriptor
00:26:47.730 [2024-07-26 09:00:06.065204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2383cc0 (9): Bad file descriptor
00:26:47.730 [2024-07-26 09:00:06.065260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.730 [2024-07-26 09:00:06.065282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.730 [2024-07-26 09:00:06.065313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.730 [2024-07-26 09:00:06.065340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.730 [2024-07-26 09:00:06.065378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2343180 is same with the state(5) to be set
00:26:47.730 [2024-07-26 09:00:06.065418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220cce0 (9): Bad file descriptor
00:26:47.730 [2024-07-26 09:00:06.065470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.730 [2024-07-26 09:00:06.065496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.730 [2024-07-26 09:00:06.065525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.730 [2024-07-26 09:00:06.065552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.730 [2024-07-26 09:00:06.065580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3ab0 is same with the state(5) to be set
00:26:47.730 [2024-07-26 09:00:06.065754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.065776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.065820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.065850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.065879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.065908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.065936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.065966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.065981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.065994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.730 [2024-07-26 09:00:06.066605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.730 [2024-07-26 09:00:06.066620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.066982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.066998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.731 [2024-07-26 09:00:06.067716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.731 [2024-07-26 09:00:06.067740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236fff0 is same with the state(5) to be set
00:26:47.731 [2024-07-26 09:00:06.069546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:47.731 [2024-07-26 09:00:06.069850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.731 [2024-07-26 09:00:06.069882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238e920 with addr=10.0.0.2, port=4420
00:26:47.731 [2024-07-26 09:00:06.069899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238e920 is same with the state(5) to be set
00:26:47.731 [2024-07-26 09:00:06.070251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e920 (9): Bad file descriptor
00:26:47.731 [2024-07-26 09:00:06.070322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:26:47.731 [2024-07-26 09:00:06.070342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:26:47.732 [2024-07-26 09:00:06.070358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:26:47.732 [2024-07-26 09:00:06.070422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.732 [2024-07-26 09:00:06.070541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.732 [2024-07-26 09:00:06.070734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.732 [2024-07-26 09:00:06.070762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21e9f10 with addr=10.0.0.2, port=4420
00:26:47.732 [2024-07-26 09:00:06.070778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e9f10 is same with the state(5) to be set
00:26:47.732 [2024-07-26 09:00:06.070827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e9f10 (9): Bad file descriptor
00:26:47.732 [2024-07-26 09:00:06.070878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:47.732 [2024-07-26 09:00:06.070894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:47.732 [2024-07-26 09:00:06.070913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.732 [2024-07-26 09:00:06.070963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.732 [2024-07-26 09:00:06.072094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:26:47.732 [2024-07-26 09:00:06.072283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.732 [2024-07-26 09:00:06.072312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf610 with addr=10.0.0.2, port=4420
00:26:47.732 [2024-07-26 09:00:06.072328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf610 is same with the state(5) to be set
00:26:47.732 [2024-07-26 09:00:06.072379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf610 (9): Bad file descriptor
00:26:47.732 [2024-07-26 09:00:06.072429] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:26:47.732 [2024-07-26 09:00:06.072447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:26:47.732 [2024-07-26 09:00:06.072460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:26:47.732 [2024-07-26 09:00:06.072509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.732 [2024-07-26 09:00:06.072598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:47.732 [2024-07-26 09:00:06.072786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.732 [2024-07-26 09:00:06.072813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a3070 with addr=10.0.0.2, port=4420 00:26:47.732 [2024-07-26 09:00:06.072828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3070 is same with the state(5) to be set 00:26:47.732 [2024-07-26 09:00:06.072878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a3070 (9): Bad file descriptor 00:26:47.732 [2024-07-26 09:00:06.072927] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:47.732 [2024-07-26 09:00:06.072945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:47.732 [2024-07-26 09:00:06.072958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:47.732 [2024-07-26 09:00:06.073007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.732 [2024-07-26 09:00:06.075189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2343180 (9): Bad file descriptor 00:26:47.732 [2024-07-26 09:00:06.075239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b3ab0 (9): Bad file descriptor 00:26:47.732 [2024-07-26 09:00:06.075385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075549] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 
09:00:06.075884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.075975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.075989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.732 [2024-07-26 09:00:06.076004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.732 [2024-07-26 09:00:06.076018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.733 [2024-07-26 09:00:06.076428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076605] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.076967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.076988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.077001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.077016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.077030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.077045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.077068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.077085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.077099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.077115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 
09:00:06.077128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.077144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.077162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.077177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.077191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.077207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.077220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.077236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.733 [2024-07-26 09:00:06.077249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.733 [2024-07-26 09:00:06.077265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.077278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.077294] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.077317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.077333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.077346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.077362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.077383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.077397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2377950 is same with the state(5) to be set 00:26:47.734 [2024-07-26 09:00:06.078682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.078705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.078737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.078757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.078774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.078795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.078811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.078825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.078840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.078854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.078869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.078883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.078898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.078912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.078927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.078947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.078962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.078976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.078991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.079011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.079026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.079040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.079056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.079084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.079105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.079118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.734 [2024-07-26 09:00:06.079134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.734 [2024-07-26 09:00:06.079148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.734 [2024-07-26 09:00:06.079177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.734 [2024-07-26 09:00:06.079192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" notice pairs repeat for cid:15 through cid:63 (lba 26496 through 32640, len:128 each) ...]
00:26:47.735 [2024-07-26 09:00:06.080721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378de0 is same with the state(5) to be set
00:26:47.735 [2024-07-26 09:00:06.082049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.735 [2024-07-26 09:00:06.082078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" notice pairs repeat for cid:1 through cid:63 (lba 24704 through 32640, len:128 each) ...]
00:26:47.737 [2024-07-26 09:00:06.084084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e3430 is same with the state(5) to be set
00:26:47.737 [2024-07-26 09:00:06.085362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.737 [2024-07-26 09:00:06.085385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" notice pairs repeat for cid:1 and cid:2 (lba 24704, 24832) ...]
00:26:47.737 [2024-07-26 09:00:06.085467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085641] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085807] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.737 [2024-07-26 09:00:06.085922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.737 [2024-07-26 09:00:06.085938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.085951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.085967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.085980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.085996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 
09:00:06.086153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.738 [2024-07-26 09:00:06.086645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086808] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.086980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.086995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.087009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.087024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.087037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.087053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.087077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.738 [2024-07-26 09:00:06.087101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.738 [2024-07-26 09:00:06.087115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.087130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.087143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.087165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.087182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.087198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.087212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.087227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.087240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.087256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.087269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.087283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b173d0 is same with the state(5) to be set 00:26:47.739 [2024-07-26 09:00:06.088565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:47.739 [2024-07-26 09:00:06.088605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:47.739 [2024-07-26 09:00:06.088623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:47.739 [2024-07-26 09:00:06.088641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:26:47.739 [2024-07-26 09:00:06.089171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.739 [2024-07-26 09:00:06.089203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b5ad0 with addr=10.0.0.2, port=4420 00:26:47.739 [2024-07-26 09:00:06.089220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5ad0 is same with the state(5) to be set 00:26:47.739 [2024-07-26 09:00:06.089355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.739 [2024-07-26 09:00:06.089383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2216300 with addr=10.0.0.2, port=4420 00:26:47.739 [2024-07-26 09:00:06.089399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2216300 is same with the state(5) to be set 00:26:47.739 [2024-07-26 09:00:06.089516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.739 [2024-07-26 09:00:06.089542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220cce0 with addr=10.0.0.2, port=4420 00:26:47.739 [2024-07-26 09:00:06.089557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220cce0 is same with the state(5) to be set 00:26:47.739 [2024-07-26 09:00:06.089677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.739 [2024-07-26 09:00:06.089700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2383cc0 with addr=10.0.0.2, port=4420 00:26:47.739 [2024-07-26 09:00:06.089715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2383cc0 is same with the state(5) to be set 
00:26:47.739 [2024-07-26 09:00:06.090851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.090875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.090908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.090924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.090945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.090968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.090983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.090997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.091012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.091026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.091041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.091054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.091080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.091105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.091120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.091133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.091148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.091161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.091177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.091190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.091205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.739 [2024-07-26 09:00:06.091218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.739 [2024-07-26 09:00:06.091234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.739 [2024-07-26 09:00:06.091247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:12 through cid:61 (lba 17920 through 24192 in steps of 128, len:128 each, timestamps 09:00:06.091262 through 09:00:06.092729) ...]
00:26:47.740 [2024-07-26 09:00:06.092744] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.740 [2024-07-26 09:00:06.092758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.740 [2024-07-26 09:00:06.092773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.740 [2024-07-26 09:00:06.092786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.740 [2024-07-26 09:00:06.092800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eb10 is same with the state(5) to be set
00:26:47.740 [2024-07-26 09:00:06.094484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:47.740 [2024-07-26 09:00:06.094527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.740 [2024-07-26 09:00:06.094544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:26:47.740 [2024-07-26 09:00:06.094561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:26:47.740 [2024-07-26 09:00:06.094577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:47.741 task offset: 32384 on job bdev=Nvme1n1 fails
00:26:47.741
00:26:47.741 Latency(us)
00:26:47.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.741 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.741 Job: Nvme1n1 ended in about 0.96 seconds with error
00:26:47.741 Verification LBA range: start 0x0 length 0x400
00:26:47.741 Nvme1n1 : 0.96 203.74 12.73 66.53 0.00 234277.34 3932.16 257872.02
00:26:47.741 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.741 Job: Nvme2n1 ended in about 1.00 seconds with error
00:26:47.741 Verification LBA range: start 0x0 length 0x400
00:26:47.741 Nvme2n1 : 1.00 192.70 12.04 64.23 0.00 241950.53 21845.33 245444.46
00:26:47.741 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.741 Job: Nvme3n1 ended in about 1.00 seconds with error
00:26:47.741 Verification LBA range: start 0x0 length 0x400
00:26:47.741 Nvme3n1 : 1.00 192.06 12.00 64.02 0.00 238134.80 17087.91 251658.24
00:26:47.741 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.741 Job: Nvme4n1 ended in about 1.00 seconds with error
00:26:47.741 Verification LBA range: start 0x0 length 0x400
00:26:47.741 Nvme4n1 : 1.00 191.42 11.96 63.81 0.00 234307.13 23592.96 256318.58
00:26:47.741 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.741 Job: Nvme5n1 ended in about 0.98 seconds with error
00:26:47.741 Verification LBA range: start 0x0 length 0x400
00:26:47.741 Nvme5n1 : 0.98 130.69 8.17 65.35 0.00 298441.39 22039.51 264085.81
00:26:47.741 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.741 Verification LBA range: start 0x0 length 0x400
00:26:47.741 Nvme6n1 : 0.98 203.07 12.69 0.00 0.00 280318.64 1529.17 264085.81
00:26:47.741 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.741 Job: Nvme7n1 ended in about 1.01 seconds with error
00:26:47.741 Verification LBA range: start 0x0 length 0x400
00:26:47.741 Nvme7n1 : 1.01 190.81 11.93 63.60 0.00 221524.39 17767.54 265639.25
00:26:47.741 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.741 Verification LBA range: start 0x0 length 0x400
00:26:47.741 Nvme8n1 : 0.97 202.65 12.67 0.00 0.00 269342.24 1868.99 270299.59
00:26:47.741 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.741 Job: Nvme9n1 ended in about 1.01 seconds with error
00:26:47.741 Verification LBA range: start 0x0 length 0x400
00:26:47.741 Nvme9n1 : 1.01 126.52 7.91 63.26 0.00 285468.19 22524.97 298261.62
00:26:47.741 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:47.741 Job: Nvme10n1 ended in about 0.99 seconds with error
00:26:47.741 Verification LBA range: start 0x0 length 0x400
00:26:47.741 Nvme10n1 : 0.99 133.77 8.36 64.86 0.00 265795.04 20486.07 268746.15
00:26:47.741 ===================================================================================================================
00:26:47.741 Total : 1767.43 110.46 515.65 0.00 253774.59 1529.17 298261.62
00:26:47.741 [2024-07-26 09:00:06.123497] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:47.741 [2024-07-26 09:00:06.123585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:26:47.741 [2024-07-26 09:00:06.123727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b5ad0 (9): Bad file descriptor
00:26:47.741 [2024-07-26 09:00:06.123772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2216300 (9): Bad file descriptor
00:26:47.741 [2024-07-26 09:00:06.123791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220cce0 (9): Bad file descriptor
00:26:47.741 [2024-07-26 09:00:06.123809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2383cc0 (9): Bad file descriptor
00:26:47.741 [2024-07-26 09:00:06.124230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.741 [2024-07-26 09:00:06.124270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238e920 with addr=10.0.0.2, port=4420
00:26:47.741 [2024-07-26 09:00:06.124292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x238e920 is same with the state(5) to be set 00:26:47.741 [2024-07-26 09:00:06.124440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.741 [2024-07-26 09:00:06.124466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21e9f10 with addr=10.0.0.2, port=4420 00:26:47.741 [2024-07-26 09:00:06.124482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e9f10 is same with the state(5) to be set 00:26:47.741 [2024-07-26 09:00:06.124611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.741 [2024-07-26 09:00:06.124637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf610 with addr=10.0.0.2, port=4420 00:26:47.741 [2024-07-26 09:00:06.124653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf610 is same with the state(5) to be set 00:26:47.741 [2024-07-26 09:00:06.124771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.741 [2024-07-26 09:00:06.124796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a3070 with addr=10.0.0.2, port=4420 00:26:47.741 [2024-07-26 09:00:06.124815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3070 is same with the state(5) to be set 00:26:47.741 [2024-07-26 09:00:06.124933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.741 [2024-07-26 09:00:06.124957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b3ab0 with addr=10.0.0.2, port=4420 00:26:47.741 [2024-07-26 09:00:06.124973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3ab0 is same with the state(5) to be set 00:26:47.741 [2024-07-26 09:00:06.125106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.741 [2024-07-26 
09:00:06.125133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2343180 with addr=10.0.0.2, port=4420 00:26:47.741 [2024-07-26 09:00:06.125148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2343180 is same with the state(5) to be set 00:26:47.741 [2024-07-26 09:00:06.125165] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:47.741 [2024-07-26 09:00:06.125178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:47.741 [2024-07-26 09:00:06.125196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:47.741 [2024-07-26 09:00:06.125215] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:47.741 [2024-07-26 09:00:06.125229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:47.741 [2024-07-26 09:00:06.125242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:47.741 [2024-07-26 09:00:06.125259] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:47.741 [2024-07-26 09:00:06.125273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:47.741 [2024-07-26 09:00:06.125291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:26:47.741 [2024-07-26 09:00:06.125308] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:47.741 [2024-07-26 09:00:06.125321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:26:47.741 [2024-07-26 09:00:06.125334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:47.741 [2024-07-26 09:00:06.125395] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:47.741 [2024-07-26 09:00:06.125420] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:47.741 [2024-07-26 09:00:06.125443] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:47.741 [2024-07-26 09:00:06.125461] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:47.741 [2024-07-26 09:00:06.125809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.741 [2024-07-26 09:00:06.125834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.741 [2024-07-26 09:00:06.125847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.741 [2024-07-26 09:00:06.125859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.741 [2024-07-26 09:00:06.125876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e920 (9): Bad file descriptor 00:26:47.741 [2024-07-26 09:00:06.125896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e9f10 (9): Bad file descriptor 00:26:47.741 [2024-07-26 09:00:06.125914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf610 (9): Bad file descriptor 00:26:47.741 [2024-07-26 09:00:06.125931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a3070 (9): Bad file descriptor 00:26:47.741 [2024-07-26 09:00:06.125948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b3ab0 (9): Bad file descriptor 00:26:47.741 [2024-07-26 09:00:06.125965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2343180 (9): Bad file descriptor 00:26:47.741 [2024-07-26 09:00:06.126020] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:47.742 [2024-07-26 09:00:06.126040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:47.742 [2024-07-26 09:00:06.126053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:47.742 [2024-07-26 09:00:06.126080] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.742 [2024-07-26 09:00:06.126095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.742 [2024-07-26 09:00:06.126107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:47.742 [2024-07-26 09:00:06.126124] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:47.742 [2024-07-26 09:00:06.126137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:47.742 [2024-07-26 09:00:06.126150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:47.742 [2024-07-26 09:00:06.126169] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:47.742 [2024-07-26 09:00:06.126182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:47.742 [2024-07-26 09:00:06.126200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:47.742 [2024-07-26 09:00:06.126216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:47.742 [2024-07-26 09:00:06.126229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:47.742 [2024-07-26 09:00:06.126242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:47.742 [2024-07-26 09:00:06.126257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:47.742 [2024-07-26 09:00:06.126271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:47.742 [2024-07-26 09:00:06.126283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:47.742 [2024-07-26 09:00:06.126335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.742 [2024-07-26 09:00:06.126354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.742 [2024-07-26 09:00:06.126366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.742 [2024-07-26 09:00:06.126377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.742 [2024-07-26 09:00:06.126391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.742 [2024-07-26 09:00:06.126402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.306 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:26:48.306 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1047658 00:26:49.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1047658) - No such process 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:49.239 09:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:49.239 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:49.239 rmmod nvme_tcp 00:26:49.240 rmmod nvme_fabrics 00:26:49.240 rmmod nvme_keyring 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:49.498 09:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.498 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.404 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:51.404 00:26:51.404 real 0m7.519s 00:26:51.404 user 0m18.293s 00:26:51.404 sys 0m1.485s 00:26:51.404 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:51.404 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:51.404 ************************************ 00:26:51.404 END TEST nvmf_shutdown_tc3 00:26:51.404 ************************************ 00:26:51.404 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:26:51.404 00:26:51.404 real 0m27.453s 00:26:51.404 user 1m16.627s 00:26:51.404 sys 0m6.409s 00:26:51.404 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:51.404 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:51.404 ************************************ 00:26:51.404 END TEST nvmf_shutdown 00:26:51.404 ************************************ 00:26:51.404 09:00:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:26:51.404 00:26:51.404 real 16m47.457s 00:26:51.404 user 47m14.671s 00:26:51.404 sys 3m52.673s 00:26:51.404 09:00:09 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:51.404 09:00:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:51.404 ************************************ 00:26:51.404 END TEST nvmf_target_extra 00:26:51.404 ************************************ 00:26:51.404 09:00:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:51.404 09:00:09 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:51.404 09:00:09 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:51.404 09:00:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:51.404 ************************************ 00:26:51.404 START TEST nvmf_host 00:26:51.404 ************************************ 00:26:51.404 09:00:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:51.662 * Looking for test storage... 
00:26:51.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.662 09:00:09 
nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.662 ************************************ 00:26:51.662 START TEST nvmf_multicontroller 00:26:51.662 ************************************ 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:51.662 * Looking for test storage... 00:26:51.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.662 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- paths/export.sh@5 -- # export PATH 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:51.663 09:00:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.663 09:00:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:53.563 09:00:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.564 09:00:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:53.564 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:53.564 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:53.564 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:53.564 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- 
# net_devs+=("${pci_net_devs[@]}") 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:53.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:26:53.564 00:26:53.564 --- 10.0.0.2 ping statistics --- 00:26:53.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.564 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:26:53.564 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:26:53.564 00:26:53.564 --- 10.0.0.1 ping statistics --- 00:26:53.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.565 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:26:53.565 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.565 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:26:53.565 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:53.565 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.565 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:53.565 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:53.565 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.565 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:53.565 09:00:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:53.565 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:53.565 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:53.565 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:53.565 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:53.824 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1050706 00:26:53.824 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:53.824 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1050706 00:26:53.824 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1050706 ']' 00:26:53.824 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.824 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:53.824 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.824 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:53.824 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:53.824 [2024-07-26 09:00:12.075512] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:53.824 [2024-07-26 09:00:12.075593] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.824 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.824 [2024-07-26 09:00:12.120990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:53.824 [2024-07-26 09:00:12.148624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:53.824 [2024-07-26 09:00:12.236317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:53.824 [2024-07-26 09:00:12.236387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.824 [2024-07-26 09:00:12.236414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.824 [2024-07-26 09:00:12.236426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.825 [2024-07-26 09:00:12.236436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.825 [2024-07-26 09:00:12.236518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.825 [2024-07-26 09:00:12.236584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.825 [2024-07-26 09:00:12.236586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.089 [2024-07-26 
09:00:12.375931] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.089 Malloc0 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.089 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.090 [2024-07-26 09:00:12.434206] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.090 [2024-07-26 09:00:12.442053] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.090 Malloc1 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.090 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.091 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1050814 00:26:54.091 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:54.091 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:54.091 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1050814 /var/tmp/bdevperf.sock 00:26:54.091 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1050814 ']' 00:26:54.091 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:54.091 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.091 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:54.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:54.091 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:54.091 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.352 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.352 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:26:54.352 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:54.352 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.352 09:00:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.952 NVMe0n1 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.952 1 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.952 request: 00:26:54.952 { 00:26:54.952 "name": "NVMe0", 00:26:54.952 "trtype": "tcp", 00:26:54.952 "traddr": "10.0.0.2", 00:26:54.952 "adrfam": "ipv4", 00:26:54.952 "trsvcid": "4420", 00:26:54.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.952 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:54.952 "hostaddr": "10.0.0.2", 00:26:54.952 "hostsvcid": "60000", 00:26:54.952 "prchk_reftag": false, 00:26:54.952 "prchk_guard": false, 00:26:54.952 "hdgst": false, 00:26:54.952 "ddgst": false, 00:26:54.952 "method": "bdev_nvme_attach_controller", 00:26:54.952 "req_id": 1 00:26:54.952 } 00:26:54.952 Got JSON-RPC error response 00:26:54.952 response: 00:26:54.952 { 00:26:54.952 "code": -114, 00:26:54.952 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:54.952 } 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 
10.0.0.2 -c 60000 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.952 request: 00:26:54.952 { 00:26:54.952 "name": "NVMe0", 00:26:54.952 "trtype": "tcp", 00:26:54.952 "traddr": "10.0.0.2", 00:26:54.952 "adrfam": "ipv4", 00:26:54.952 "trsvcid": "4420", 00:26:54.952 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:54.952 "hostaddr": "10.0.0.2", 00:26:54.952 "hostsvcid": "60000", 00:26:54.952 "prchk_reftag": false, 00:26:54.952 "prchk_guard": false, 00:26:54.952 "hdgst": false, 00:26:54.952 "ddgst": false, 00:26:54.952 "method": "bdev_nvme_attach_controller", 00:26:54.952 "req_id": 1 00:26:54.952 } 00:26:54.952 Got JSON-RPC error response 00:26:54.952 response: 00:26:54.952 { 
00:26:54.952 "code": -114, 00:26:54.952 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:54.952 } 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.952 request: 00:26:54.952 { 00:26:54.952 "name": "NVMe0", 00:26:54.952 "trtype": "tcp", 00:26:54.952 "traddr": "10.0.0.2", 00:26:54.952 "adrfam": "ipv4", 00:26:54.952 "trsvcid": "4420", 00:26:54.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.952 "hostaddr": "10.0.0.2", 00:26:54.952 "hostsvcid": "60000", 00:26:54.952 "prchk_reftag": false, 00:26:54.952 "prchk_guard": false, 00:26:54.952 "hdgst": false, 00:26:54.952 "ddgst": false, 00:26:54.952 "multipath": "disable", 00:26:54.952 "method": "bdev_nvme_attach_controller", 00:26:54.952 "req_id": 1 00:26:54.952 } 00:26:54.952 Got JSON-RPC error response 00:26:54.952 response: 00:26:54.952 { 00:26:54.952 "code": -114, 00:26:54.952 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:54.952 } 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:54.952 
09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:54.952 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.953 request: 00:26:54.953 { 00:26:54.953 "name": "NVMe0", 00:26:54.953 "trtype": "tcp", 00:26:54.953 "traddr": "10.0.0.2", 00:26:54.953 "adrfam": "ipv4", 00:26:54.953 "trsvcid": "4420", 00:26:54.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.953 "hostaddr": "10.0.0.2", 00:26:54.953 "hostsvcid": "60000", 00:26:54.953 "prchk_reftag": false, 00:26:54.953 "prchk_guard": false, 00:26:54.953 "hdgst": false, 00:26:54.953 "ddgst": false, 00:26:54.953 "multipath": "failover", 00:26:54.953 "method": "bdev_nvme_attach_controller", 00:26:54.953 "req_id": 1 00:26:54.953 } 00:26:54.953 Got JSON-RPC error response 00:26:54.953 
response: 00:26:54.953 { 00:26:54.953 "code": -114, 00:26:54.953 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:54.953 } 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.953 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.953 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:55.211 00:26:55.211 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.211 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:55.211 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:55.211 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.211 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:55.211 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.211 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:55.211 09:00:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:56.589 0 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.589 09:00:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1050814 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1050814 ']' 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1050814 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050814 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1050814' 00:26:56.589 killing process with pid 1050814 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1050814 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1050814 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 
00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:26:56.589 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:56.589 [2024-07-26 09:00:12.546216] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:56.589 [2024-07-26 09:00:12.546310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050814 ] 00:26:56.589 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.589 [2024-07-26 09:00:12.579386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:56.589 [2024-07-26 09:00:12.608553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.589 [2024-07-26 09:00:12.696265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.589 [2024-07-26 09:00:13.503498] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 32dc21cf-b09a-40e4-985c-40352e4b6d7e already exists 00:26:56.589 [2024-07-26 09:00:13.503543] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:32dc21cf-b09a-40e4-985c-40352e4b6d7e alias for bdev NVMe1n1 00:26:56.589 [2024-07-26 09:00:13.503574] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:56.589 Running I/O for 1 seconds... 00:26:56.589 00:26:56.589 Latency(us) 00:26:56.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.589 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:56.589 NVMe0n1 : 1.01 19414.12 75.84 0.00 0.00 6581.78 4199.16 11942.12 00:26:56.589 =================================================================================================================== 00:26:56.589 Total : 19414.12 75.84 0.00 0.00 6581.78 4199.16 11942.12 00:26:56.589 Received shutdown signal, test time was about 1.000000 seconds 00:26:56.589 00:26:56.589 Latency(us) 00:26:56.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.589 =================================================================================================================== 00:26:56.589 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:56.589 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@108 -- # nvmftestfini 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.589 rmmod nvme_tcp 00:26:56.589 rmmod nvme_fabrics 00:26:56.589 rmmod nvme_keyring 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1050706 ']' 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1050706 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1050706 ']' 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1050706 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:56.589 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050706 00:26:56.589 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:26:56.589 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:56.589 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1050706' 00:26:56.589 killing process with pid 1050706 00:26:56.589 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1050706 00:26:56.589 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1050706 00:26:56.847 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:56.847 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:56.847 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:56.847 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.847 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.847 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.847 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.847 09:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:59.385 00:26:59.385 real 0m7.377s 00:26:59.385 user 0m12.068s 00:26:59.385 sys 0m2.238s 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:59.385 ************************************ 00:26:59.385 END 
TEST nvmf_multicontroller 00:26:59.385 ************************************ 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.385 ************************************ 00:26:59.385 START TEST nvmf_aer 00:26:59.385 ************************************ 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:59.385 * Looking for test storage... 00:26:59.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:59.385 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:26:59.386 09:00:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.285 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.285 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:01.285 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:01.285 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # 
x722=() 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:01.286 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:01.286 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:01.286 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:01.286 Found net devices under 0000:0a:00.1: 
cvl_0_1 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.286 
09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:01.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:27:01.286 00:27:01.286 --- 10.0.0.2 ping statistics --- 00:27:01.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.286 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:27:01.286 00:27:01.286 --- 10.0.0.1 ping statistics --- 00:27:01.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.286 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1053061 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1053061 00:27:01.286 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:01.286 09:00:19 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1053061 ']' 00:27:01.287 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.287 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:01.287 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.287 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:01.287 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.287 [2024-07-26 09:00:19.638679] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:01.287 [2024-07-26 09:00:19.638758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.287 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.287 [2024-07-26 09:00:19.676778] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:01.287 [2024-07-26 09:00:19.709551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.547 [2024-07-26 09:00:19.800954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.547 [2024-07-26 09:00:19.801017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:01.547 [2024-07-26 09:00:19.801042] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.547 [2024-07-26 09:00:19.801075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.547 [2024-07-26 09:00:19.801095] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.547 [2024-07-26 09:00:19.801173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.547 [2024-07-26 09:00:19.801234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.547 [2024-07-26 09:00:19.801361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.547 [2024-07-26 09:00:19.801369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.547 [2024-07-26 09:00:19.960602] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.547 Malloc0 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.547 09:00:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.547 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.547 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:01.547 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.547 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.806 [2024-07-26 09:00:20.014067] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- 
host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.806 [ 00:27:01.806 { 00:27:01.806 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:01.806 "subtype": "Discovery", 00:27:01.806 "listen_addresses": [], 00:27:01.806 "allow_any_host": true, 00:27:01.806 "hosts": [] 00:27:01.806 }, 00:27:01.806 { 00:27:01.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.806 "subtype": "NVMe", 00:27:01.806 "listen_addresses": [ 00:27:01.806 { 00:27:01.806 "trtype": "TCP", 00:27:01.806 "adrfam": "IPv4", 00:27:01.806 "traddr": "10.0.0.2", 00:27:01.806 "trsvcid": "4420" 00:27:01.806 } 00:27:01.806 ], 00:27:01.806 "allow_any_host": true, 00:27:01.806 "hosts": [], 00:27:01.806 "serial_number": "SPDK00000000000001", 00:27:01.806 "model_number": "SPDK bdev Controller", 00:27:01.806 "max_namespaces": 2, 00:27:01.806 "min_cntlid": 1, 00:27:01.806 "max_cntlid": 65519, 00:27:01.806 "namespaces": [ 00:27:01.806 { 00:27:01.806 "nsid": 1, 00:27:01.806 "bdev_name": "Malloc0", 00:27:01.806 "name": "Malloc0", 00:27:01.806 "nguid": "FC24FFDA70454404BEDCCC388F68AAA1", 00:27:01.806 "uuid": "fc24ffda-7045-4404-bedc-cc388f68aaa1" 00:27:01.806 } 00:27:01.806 ] 00:27:01.806 } 00:27:01.806 ] 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1053090 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:01.806 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.806 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.065 Malloc1 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.065 Asynchronous Event Request test 00:27:02.065 Attaching to 10.0.0.2 00:27:02.065 Attached to 10.0.0.2 00:27:02.065 Registering asynchronous event callbacks... 00:27:02.065 Starting namespace attribute notice tests for all controllers... 00:27:02.065 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:02.065 aer_cb - Changed Namespace 00:27:02.065 Cleaning up... 
00:27:02.065 [ 00:27:02.065 { 00:27:02.065 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:02.065 "subtype": "Discovery", 00:27:02.065 "listen_addresses": [], 00:27:02.065 "allow_any_host": true, 00:27:02.065 "hosts": [] 00:27:02.065 }, 00:27:02.065 { 00:27:02.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.065 "subtype": "NVMe", 00:27:02.065 "listen_addresses": [ 00:27:02.065 { 00:27:02.065 "trtype": "TCP", 00:27:02.065 "adrfam": "IPv4", 00:27:02.065 "traddr": "10.0.0.2", 00:27:02.065 "trsvcid": "4420" 00:27:02.065 } 00:27:02.065 ], 00:27:02.065 "allow_any_host": true, 00:27:02.065 "hosts": [], 00:27:02.065 "serial_number": "SPDK00000000000001", 00:27:02.065 "model_number": "SPDK bdev Controller", 00:27:02.065 "max_namespaces": 2, 00:27:02.065 "min_cntlid": 1, 00:27:02.065 "max_cntlid": 65519, 00:27:02.065 "namespaces": [ 00:27:02.065 { 00:27:02.065 "nsid": 1, 00:27:02.065 "bdev_name": "Malloc0", 00:27:02.065 "name": "Malloc0", 00:27:02.065 "nguid": "FC24FFDA70454404BEDCCC388F68AAA1", 00:27:02.065 "uuid": "fc24ffda-7045-4404-bedc-cc388f68aaa1" 00:27:02.065 }, 00:27:02.065 { 00:27:02.065 "nsid": 2, 00:27:02.065 "bdev_name": "Malloc1", 00:27:02.065 "name": "Malloc1", 00:27:02.065 "nguid": "077CA8808B9847458D5B3EB2E30008B0", 00:27:02.065 "uuid": "077ca880-8b98-4745-8d5b-3eb2e30008b0" 00:27:02.065 } 00:27:02.065 ] 00:27:02.065 } 00:27:02.065 ] 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1053090 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.065 09:00:20 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:02.065 rmmod nvme_tcp 00:27:02.065 rmmod nvme_fabrics 00:27:02.065 rmmod nvme_keyring 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 
1053061 ']' 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1053061 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1053061 ']' 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1053061 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1053061 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1053061' 00:27:02.065 killing process with pid 1053061 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1053061 00:27:02.065 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1053061 00:27:02.323 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.323 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:02.323 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.323 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.323 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.323 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.323 09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.323 
09:00:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.860 00:27:04.860 real 0m5.348s 00:27:04.860 user 0m4.067s 00:27:04.860 sys 0m1.942s 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:04.860 ************************************ 00:27:04.860 END TEST nvmf_aer 00:27:04.860 ************************************ 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.860 ************************************ 00:27:04.860 START TEST nvmf_async_init 00:27:04.860 ************************************ 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:04.860 * Looking for test storage... 
00:27:04.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.860 09:00:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:04.860 09:00:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1a526291adaa41af8fb8c96cce961f69 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.860 09:00:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.234 
09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.234 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:06.235 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.235 09:00:24 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:06.235 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:06.235 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:06.235 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:06.235 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:06.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:27:06.494 00:27:06.494 --- 10.0.0.2 ping statistics --- 00:27:06.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.494 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:27:06.494 00:27:06.494 --- 10.0.0.1 ping statistics --- 00:27:06.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.494 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:06.494 09:00:24 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1055017 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1055017 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1055017 ']' 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.494 09:00:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.494 [2024-07-26 09:00:24.797127] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:27:06.494 [2024-07-26 09:00:24.797201] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.494 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.494 [2024-07-26 09:00:24.833518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:06.494 [2024-07-26 09:00:24.864941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.753 [2024-07-26 09:00:24.961981] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.753 [2024-07-26 09:00:24.962035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.753 [2024-07-26 09:00:24.962080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.753 [2024-07-26 09:00:24.962104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.753 [2024-07-26 09:00:24.962137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
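The trace above shows the harness launching `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and then `waitforlisten` blocking until the target's RPC socket `/var/tmp/spdk.sock` comes up (the "Waiting for process to start up and listen on UNIX domain socket..." line). A minimal sketch of that polling pattern — the helper name `wait_for_rpc_sock` and the retry/interval values are illustrative, not SPDK's actual `waitforlisten` implementation:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until a path (normally the
# target's UNIX-domain RPC socket) appears, or give up after max_retries.
# Illustrative only; SPDK's real helper also probes the RPC endpoint.
wait_for_rpc_sock() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # -e rather than -S so the sketch also works with plain files
        [[ -e "$path" ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

In the log, only once this poll succeeds does the harness proceed to issue `rpc_cmd nvmf_create_transport -t tcp -o` and the rest of the setup RPCs.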
00:27:06.753 [2024-07-26 09:00:24.962174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.753 [2024-07-26 09:00:25.105315] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.753 null0 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1a526291adaa41af8fb8c96cce961f69 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.753 [2024-07-26 09:00:25.145617] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.753 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.012 nvme0n1 00:27:07.012 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.012 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:07.012 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.012 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.012 [ 00:27:07.012 { 00:27:07.012 "name": "nvme0n1", 00:27:07.012 "aliases": [ 00:27:07.012 "1a526291-adaa-41af-8fb8-c96cce961f69" 00:27:07.012 ], 00:27:07.012 "product_name": "NVMe disk", 00:27:07.012 "block_size": 512, 00:27:07.012 "num_blocks": 2097152, 00:27:07.012 "uuid": "1a526291-adaa-41af-8fb8-c96cce961f69", 00:27:07.012 "assigned_rate_limits": { 00:27:07.012 "rw_ios_per_sec": 0, 00:27:07.012 "rw_mbytes_per_sec": 0, 00:27:07.012 "r_mbytes_per_sec": 0, 00:27:07.012 "w_mbytes_per_sec": 0 00:27:07.012 }, 00:27:07.012 "claimed": false, 00:27:07.012 "zoned": false, 00:27:07.012 "supported_io_types": { 00:27:07.012 "read": true, 00:27:07.012 "write": true, 00:27:07.012 "unmap": false, 00:27:07.012 "flush": true, 00:27:07.012 "reset": true, 00:27:07.012 "nvme_admin": true, 00:27:07.012 "nvme_io": true, 00:27:07.012 "nvme_io_md": false, 00:27:07.012 "write_zeroes": true, 00:27:07.012 "zcopy": false, 00:27:07.012 "get_zone_info": false, 00:27:07.012 "zone_management": false, 00:27:07.012 "zone_append": false, 00:27:07.012 "compare": true, 00:27:07.012 "compare_and_write": true, 00:27:07.012 "abort": true, 00:27:07.012 "seek_hole": false, 00:27:07.012 "seek_data": false, 00:27:07.012 "copy": true, 00:27:07.012 "nvme_iov_md": false 
00:27:07.012 }, 00:27:07.012 "memory_domains": [ 00:27:07.012 { 00:27:07.012 "dma_device_id": "system", 00:27:07.012 "dma_device_type": 1 00:27:07.012 } 00:27:07.012 ], 00:27:07.012 "driver_specific": { 00:27:07.012 "nvme": [ 00:27:07.012 { 00:27:07.012 "trid": { 00:27:07.012 "trtype": "TCP", 00:27:07.012 "adrfam": "IPv4", 00:27:07.012 "traddr": "10.0.0.2", 00:27:07.012 "trsvcid": "4420", 00:27:07.012 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:07.012 }, 00:27:07.012 "ctrlr_data": { 00:27:07.012 "cntlid": 1, 00:27:07.012 "vendor_id": "0x8086", 00:27:07.012 "model_number": "SPDK bdev Controller", 00:27:07.012 "serial_number": "00000000000000000000", 00:27:07.012 "firmware_revision": "24.09", 00:27:07.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:07.012 "oacs": { 00:27:07.012 "security": 0, 00:27:07.012 "format": 0, 00:27:07.012 "firmware": 0, 00:27:07.012 "ns_manage": 0 00:27:07.012 }, 00:27:07.012 "multi_ctrlr": true, 00:27:07.012 "ana_reporting": false 00:27:07.012 }, 00:27:07.012 "vs": { 00:27:07.012 "nvme_version": "1.3" 00:27:07.012 }, 00:27:07.012 "ns_data": { 00:27:07.012 "id": 1, 00:27:07.012 "can_share": true 00:27:07.012 } 00:27:07.012 } 00:27:07.012 ], 00:27:07.012 "mp_policy": "active_passive" 00:27:07.012 } 00:27:07.012 } 00:27:07.012 ] 00:27:07.012 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.012 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:07.012 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.012 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.012 [2024-07-26 09:00:25.394713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:07.012 [2024-07-26 09:00:25.394796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x261d850 
(9): Bad file descriptor 00:27:07.270 [2024-07-26 09:00:25.527240] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:07.270 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.270 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:07.270 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.270 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.270 [ 00:27:07.270 { 00:27:07.270 "name": "nvme0n1", 00:27:07.270 "aliases": [ 00:27:07.270 "1a526291-adaa-41af-8fb8-c96cce961f69" 00:27:07.270 ], 00:27:07.270 "product_name": "NVMe disk", 00:27:07.270 "block_size": 512, 00:27:07.270 "num_blocks": 2097152, 00:27:07.270 "uuid": "1a526291-adaa-41af-8fb8-c96cce961f69", 00:27:07.270 "assigned_rate_limits": { 00:27:07.271 "rw_ios_per_sec": 0, 00:27:07.271 "rw_mbytes_per_sec": 0, 00:27:07.271 "r_mbytes_per_sec": 0, 00:27:07.271 "w_mbytes_per_sec": 0 00:27:07.271 }, 00:27:07.271 "claimed": false, 00:27:07.271 "zoned": false, 00:27:07.271 "supported_io_types": { 00:27:07.271 "read": true, 00:27:07.271 "write": true, 00:27:07.271 "unmap": false, 00:27:07.271 "flush": true, 00:27:07.271 "reset": true, 00:27:07.271 "nvme_admin": true, 00:27:07.271 "nvme_io": true, 00:27:07.271 "nvme_io_md": false, 00:27:07.271 "write_zeroes": true, 00:27:07.271 "zcopy": false, 00:27:07.271 "get_zone_info": false, 00:27:07.271 "zone_management": false, 00:27:07.271 "zone_append": false, 00:27:07.271 "compare": true, 00:27:07.271 "compare_and_write": true, 00:27:07.271 "abort": true, 00:27:07.271 "seek_hole": false, 00:27:07.271 "seek_data": false, 00:27:07.271 "copy": true, 00:27:07.271 "nvme_iov_md": false 00:27:07.271 }, 00:27:07.271 "memory_domains": [ 00:27:07.271 { 00:27:07.271 "dma_device_id": "system", 00:27:07.271 "dma_device_type": 1 
00:27:07.271 } 00:27:07.271 ], 00:27:07.271 "driver_specific": { 00:27:07.271 "nvme": [ 00:27:07.271 { 00:27:07.271 "trid": { 00:27:07.271 "trtype": "TCP", 00:27:07.271 "adrfam": "IPv4", 00:27:07.271 "traddr": "10.0.0.2", 00:27:07.271 "trsvcid": "4420", 00:27:07.271 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:07.271 }, 00:27:07.271 "ctrlr_data": { 00:27:07.271 "cntlid": 2, 00:27:07.271 "vendor_id": "0x8086", 00:27:07.271 "model_number": "SPDK bdev Controller", 00:27:07.271 "serial_number": "00000000000000000000", 00:27:07.271 "firmware_revision": "24.09", 00:27:07.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:07.271 "oacs": { 00:27:07.271 "security": 0, 00:27:07.271 "format": 0, 00:27:07.271 "firmware": 0, 00:27:07.271 "ns_manage": 0 00:27:07.271 }, 00:27:07.271 "multi_ctrlr": true, 00:27:07.271 "ana_reporting": false 00:27:07.271 }, 00:27:07.271 "vs": { 00:27:07.271 "nvme_version": "1.3" 00:27:07.271 }, 00:27:07.271 "ns_data": { 00:27:07.271 "id": 1, 00:27:07.271 "can_share": true 00:27:07.271 } 00:27:07.271 } 00:27:07.271 ], 00:27:07.271 "mp_policy": "active_passive" 00:27:07.271 } 00:27:07.271 } 00:27:07.271 ] 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.eei52T82av 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.eei52T82av 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.271 [2024-07-26 09:00:25.575437] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:07.271 [2024-07-26 09:00:25.575591] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eei52T82av 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.271 [2024-07-26 09:00:25.583447] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eei52T82av 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.271 [2024-07-26 09:00:25.591468] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:07.271 [2024-07-26 09:00:25.591536] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:07.271 nvme0n1 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.271 [ 00:27:07.271 { 00:27:07.271 "name": "nvme0n1", 00:27:07.271 "aliases": [ 00:27:07.271 "1a526291-adaa-41af-8fb8-c96cce961f69" 00:27:07.271 ], 00:27:07.271 "product_name": "NVMe disk", 00:27:07.271 "block_size": 512, 00:27:07.271 "num_blocks": 2097152, 00:27:07.271 "uuid": "1a526291-adaa-41af-8fb8-c96cce961f69", 00:27:07.271 "assigned_rate_limits": { 00:27:07.271 "rw_ios_per_sec": 0, 00:27:07.271 "rw_mbytes_per_sec": 0, 00:27:07.271 "r_mbytes_per_sec": 0, 00:27:07.271 "w_mbytes_per_sec": 0 00:27:07.271 }, 00:27:07.271 "claimed": false, 00:27:07.271 "zoned": false, 00:27:07.271 "supported_io_types": { 
00:27:07.271 "read": true, 00:27:07.271 "write": true, 00:27:07.271 "unmap": false, 00:27:07.271 "flush": true, 00:27:07.271 "reset": true, 00:27:07.271 "nvme_admin": true, 00:27:07.271 "nvme_io": true, 00:27:07.271 "nvme_io_md": false, 00:27:07.271 "write_zeroes": true, 00:27:07.271 "zcopy": false, 00:27:07.271 "get_zone_info": false, 00:27:07.271 "zone_management": false, 00:27:07.271 "zone_append": false, 00:27:07.271 "compare": true, 00:27:07.271 "compare_and_write": true, 00:27:07.271 "abort": true, 00:27:07.271 "seek_hole": false, 00:27:07.271 "seek_data": false, 00:27:07.271 "copy": true, 00:27:07.271 "nvme_iov_md": false 00:27:07.271 }, 00:27:07.271 "memory_domains": [ 00:27:07.271 { 00:27:07.271 "dma_device_id": "system", 00:27:07.271 "dma_device_type": 1 00:27:07.271 } 00:27:07.271 ], 00:27:07.271 "driver_specific": { 00:27:07.271 "nvme": [ 00:27:07.271 { 00:27:07.271 "trid": { 00:27:07.271 "trtype": "TCP", 00:27:07.271 "adrfam": "IPv4", 00:27:07.271 "traddr": "10.0.0.2", 00:27:07.271 "trsvcid": "4421", 00:27:07.271 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:07.271 }, 00:27:07.271 "ctrlr_data": { 00:27:07.271 "cntlid": 3, 00:27:07.271 "vendor_id": "0x8086", 00:27:07.271 "model_number": "SPDK bdev Controller", 00:27:07.271 "serial_number": "00000000000000000000", 00:27:07.271 "firmware_revision": "24.09", 00:27:07.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:07.271 "oacs": { 00:27:07.271 "security": 0, 00:27:07.271 "format": 0, 00:27:07.271 "firmware": 0, 00:27:07.271 "ns_manage": 0 00:27:07.271 }, 00:27:07.271 "multi_ctrlr": true, 00:27:07.271 "ana_reporting": false 00:27:07.271 }, 00:27:07.271 "vs": { 00:27:07.271 "nvme_version": "1.3" 00:27:07.271 }, 00:27:07.271 "ns_data": { 00:27:07.271 "id": 1, 00:27:07.271 "can_share": true 00:27:07.271 } 00:27:07.271 } 00:27:07.271 ], 00:27:07.271 "mp_policy": "active_passive" 00:27:07.271 } 00:27:07.271 } 00:27:07.271 ] 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.eei52T82av 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:07.271 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:07.272 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:07.272 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:07.272 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:07.272 rmmod nvme_tcp 00:27:07.272 rmmod nvme_fabrics 00:27:07.272 rmmod nvme_keyring 00:27:07.530 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:07.530 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:07.530 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:07.530 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1055017 ']' 00:27:07.530 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
1055017 00:27:07.530 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1055017 ']' 00:27:07.530 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1055017 00:27:07.530 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:27:07.530 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:07.530 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1055017 00:27:07.531 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:07.531 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:07.531 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1055017' 00:27:07.531 killing process with pid 1055017 00:27:07.531 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1055017 00:27:07.531 [2024-07-26 09:00:25.778388] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:07.531 [2024-07-26 09:00:25.778428] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:07.531 09:00:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1055017 00:27:07.790 09:00:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:07.790 09:00:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:07.790 09:00:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:07.790 09:00:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.790 09:00:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:07.790 09:00:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.790 09:00:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.790 09:00:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.693 09:00:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:09.693 00:27:09.693 real 0m5.296s 00:27:09.693 user 0m2.025s 00:27:09.693 sys 0m1.682s 00:27:09.693 09:00:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:09.693 09:00:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:09.693 ************************************ 00:27:09.693 END TEST nvmf_async_init 00:27:09.693 ************************************ 00:27:09.693 09:00:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:09.693 09:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:09.693 09:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:09.693 09:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.693 ************************************ 00:27:09.693 START TEST dma 00:27:09.693 ************************************ 00:27:09.693 09:00:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:09.693 * Looking for test storage... 
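The dma test that starts here (`host/dma.sh`) sources the nvmf common config and then, at `dma.sh@12`-`dma.sh@13` in the trace below, exits 0 immediately because the DMA test only applies to the rdma transport and this run uses `--transport=tcp`. A sketch of that guard — the function name `check_dma_supported` is illustrative, the script itself does the check inline:

```shell
#!/usr/bin/env bash
# Sketch of the transport guard seen at host/dma.sh@12: the DMA test is
# rdma-only, so for tcp it is skipped (the script exits 0 right away).
check_dma_supported() {
    local transport=$1
    if [[ "$transport" != "rdma" ]]; then
        echo "skipping DMA test: transport '$transport' is not rdma"
        return 1
    fi
    return 0
}
```

This is why the dma test's timing summary below reports essentially zero runtime (`real 0m0.069s`) compared with the async_init test above it.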
00:27:09.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.953 09:00:28 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:09.953 00:27:09.953 real 0m0.069s 00:27:09.953 user 0m0.035s 00:27:09.953 sys 0m0.040s 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:09.953 ************************************ 00:27:09.953 END TEST dma 00:27:09.953 ************************************ 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.953 ************************************ 00:27:09.953 START TEST nvmf_identify 00:27:09.953 ************************************ 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:09.953 * Looking for test storage... 
00:27:09.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.953 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:09.954 09:00:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:11.856 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.857 09:00:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:11.857 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:11.857 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:11.857 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:11.857 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:11.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:11.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:27:11.857 00:27:11.857 --- 10.0.0.2 ping statistics --- 00:27:11.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.857 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:11.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:27:11.857 00:27:11.857 --- 10.0.0.1 ping statistics --- 00:27:11.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.857 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
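The nvmf/common.sh lines above (sh@242 through sh@268) build a two-sided TCP test topology: the target NIC is moved into a fresh network namespace, each side gets an address, port 4420 is opened, and a ping in each direction verifies connectivity. A minimal standalone sketch of that sequence follows; the namespace, interface names, and addresses are copied from this log, while the `run` dry-run wrapper is an illustrative addition (running for real requires root, so the default only prints the commands):

```shell
#!/bin/sh
# Sketch of the netns topology the harness builds above.
# Names (cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk, 10.0.0.x) come from the log;
# DRY_RUN and run() are assumptions for illustration, not part of SPDK.
DRY_RUN=${DRY_RUN:-1}
NS=cvl_0_0_ns_spdk        # target-side namespace
TGT_IF=cvl_0_0            # NVMe-oF target interface (moved into $NS)
INI_IF=cvl_0_1            # initiator interface (stays in the root namespace)

run() {
    # Print instead of execute unless DRY_RUN=0.
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP traffic to the target's listen port.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions, as the log does.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With `DRY_RUN=0` (as root) this reproduces the state the ping statistics above confirm: 10.0.0.2 reachable from the root namespace, 10.0.0.1 reachable from inside `cvl_0_0_ns_spdk`.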
00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1057142 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1057142 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1057142 ']' 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:11.857 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.858 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.858 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:11.858 [2024-07-26 09:00:30.294266] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:27:11.858 [2024-07-26 09:00:30.294340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.116 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.116 [2024-07-26 09:00:30.331011] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:12.116 [2024-07-26 09:00:30.357884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:12.116 [2024-07-26 09:00:30.443353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.116 [2024-07-26 09:00:30.443404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.116 [2024-07-26 09:00:30.443426] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.116 [2024-07-26 09:00:30.443443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.116 [2024-07-26 09:00:30.443457] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:12.116 [2024-07-26 09:00:30.443587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.116 [2024-07-26 09:00:30.443653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.116 [2024-07-26 09:00:30.443706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.116 [2024-07-26 09:00:30.443714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.375 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:12.375 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:27:12.375 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:12.375 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.375 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:12.375 [2024-07-26 09:00:30.583530] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.375 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:12.376 Malloc0 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.376 09:00:30 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:12.376 [2024-07-26 09:00:30.659539] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:12.376 09:00:30 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:12.376 [ 00:27:12.376 { 00:27:12.376 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:12.376 "subtype": "Discovery", 00:27:12.376 "listen_addresses": [ 00:27:12.376 { 00:27:12.376 "trtype": "TCP", 00:27:12.376 "adrfam": "IPv4", 00:27:12.376 "traddr": "10.0.0.2", 00:27:12.376 "trsvcid": "4420" 00:27:12.376 } 00:27:12.376 ], 00:27:12.376 "allow_any_host": true, 00:27:12.376 "hosts": [] 00:27:12.376 }, 00:27:12.376 { 00:27:12.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:12.376 "subtype": "NVMe", 00:27:12.376 "listen_addresses": [ 00:27:12.376 { 00:27:12.376 "trtype": "TCP", 00:27:12.376 "adrfam": "IPv4", 00:27:12.376 "traddr": "10.0.0.2", 00:27:12.376 "trsvcid": "4420" 00:27:12.376 } 00:27:12.376 ], 00:27:12.376 "allow_any_host": true, 00:27:12.376 "hosts": [], 00:27:12.376 "serial_number": "SPDK00000000000001", 00:27:12.376 "model_number": "SPDK bdev Controller", 00:27:12.376 "max_namespaces": 32, 00:27:12.376 "min_cntlid": 1, 00:27:12.376 "max_cntlid": 65519, 00:27:12.376 "namespaces": [ 00:27:12.376 { 00:27:12.376 "nsid": 1, 00:27:12.376 "bdev_name": "Malloc0", 00:27:12.376 "name": "Malloc0", 00:27:12.376 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:12.376 "eui64": "ABCDEF0123456789", 00:27:12.376 "uuid": "a5a06380-f28a-44c6-961a-e52f80f2dcd6" 00:27:12.376 } 00:27:12.376 ] 00:27:12.376 } 00:27:12.376 ] 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.376 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:12.376 [2024-07-26 09:00:30.701303] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:12.376 [2024-07-26 09:00:30.701376] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057258 ] 00:27:12.376 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.376 [2024-07-26 09:00:30.720869] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:12.376 [2024-07-26 09:00:30.738810] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:12.376 [2024-07-26 09:00:30.738873] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:12.376 [2024-07-26 09:00:30.738884] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:12.376 [2024-07-26 09:00:30.738899] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:12.376 [2024-07-26 09:00:30.738913] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:12.376 [2024-07-26 09:00:30.742113] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:12.376 [2024-07-26 09:00:30.742169] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xeb7630 0 00:27:12.376 [2024-07-26 09:00:30.749068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:12.376 [2024-07-26 
09:00:30.749095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:12.376 [2024-07-26 09:00:30.749106] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:12.376 [2024-07-26 09:00:30.749114] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:12.376 [2024-07-26 09:00:30.749169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.376 [2024-07-26 09:00:30.749183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.376 [2024-07-26 09:00:30.749192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeb7630) 00:27:12.376 [2024-07-26 09:00:30.749214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:12.376 [2024-07-26 09:00:30.749246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf05f80, cid 0, qid 0 00:27:12.376 [2024-07-26 09:00:30.756072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.376 [2024-07-26 09:00:30.756091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.376 [2024-07-26 09:00:30.756099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.376 [2024-07-26 09:00:30.756108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf05f80) on tqpair=0xeb7630 00:27:12.376 [2024-07-26 09:00:30.756125] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:12.376 [2024-07-26 09:00:30.756137] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:12.376 [2024-07-26 09:00:30.756148] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:12.376 [2024-07-26 09:00:30.756173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.376 
[2024-07-26 09:00:30.756182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.376 [2024-07-26 09:00:30.756189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeb7630) 00:27:12.376 [2024-07-26 09:00:30.756201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.376 [2024-07-26 09:00:30.756225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf05f80, cid 0, qid 0 00:27:12.376 [2024-07-26 09:00:30.756396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.376 [2024-07-26 09:00:30.756412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.376 [2024-07-26 09:00:30.756419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.376 [2024-07-26 09:00:30.756426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf05f80) on tqpair=0xeb7630 00:27:12.376 [2024-07-26 09:00:30.756441] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:12.376 [2024-07-26 09:00:30.756455] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:12.376 [2024-07-26 09:00:30.756468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.376 [2024-07-26 09:00:30.756476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.376 [2024-07-26 09:00:30.756483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeb7630) 00:27:12.376 [2024-07-26 09:00:30.756495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.376 [2024-07-26 09:00:30.756517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf05f80, cid 0, qid 0 
00:27:12.376 [2024-07-26 09:00:30.756670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.376 [2024-07-26 09:00:30.756686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.376 [2024-07-26 09:00:30.756693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.376 [2024-07-26 09:00:30.756700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf05f80) on tqpair=0xeb7630 00:27:12.376 [2024-07-26 09:00:30.756709] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:12.376 [2024-07-26 09:00:30.756724] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:12.376 [2024-07-26 09:00:30.756737] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.376 [2024-07-26 09:00:30.756745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.376 [2024-07-26 09:00:30.756752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeb7630) 00:27:12.377 [2024-07-26 09:00:30.756762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.377 [2024-07-26 09:00:30.756789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf05f80, cid 0, qid 0 00:27:12.377 [2024-07-26 09:00:30.756898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.377 [2024-07-26 09:00:30.756911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.377 [2024-07-26 09:00:30.756918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.756926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf05f80) on tqpair=0xeb7630 00:27:12.377 [2024-07-26 09:00:30.756935] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:12.377 [2024-07-26 09:00:30.756952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.756961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.756969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeb7630) 00:27:12.377 [2024-07-26 09:00:30.756980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.377 [2024-07-26 09:00:30.757001] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf05f80, cid 0, qid 0 00:27:12.377 [2024-07-26 09:00:30.757131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.377 [2024-07-26 09:00:30.757147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.377 [2024-07-26 09:00:30.757154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.757161] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf05f80) on tqpair=0xeb7630 00:27:12.377 [2024-07-26 09:00:30.757170] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:12.377 [2024-07-26 09:00:30.757180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:12.377 [2024-07-26 09:00:30.757193] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:12.377 [2024-07-26 09:00:30.757304] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 
00:27:12.377 [2024-07-26 09:00:30.757313] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:12.377 [2024-07-26 09:00:30.757328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.757336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.757343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeb7630) 00:27:12.377 [2024-07-26 09:00:30.757354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.377 [2024-07-26 09:00:30.757376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf05f80, cid 0, qid 0 00:27:12.377 [2024-07-26 09:00:30.757523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.377 [2024-07-26 09:00:30.757535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.377 [2024-07-26 09:00:30.757542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.757549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf05f80) on tqpair=0xeb7630 00:27:12.377 [2024-07-26 09:00:30.757558] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:12.377 [2024-07-26 09:00:30.757574] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.757584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.757591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeb7630) 00:27:12.377 [2024-07-26 09:00:30.757606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.377 [2024-07-26 09:00:30.757628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf05f80, cid 0, qid 0 00:27:12.377 [2024-07-26 09:00:30.757738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.377 [2024-07-26 09:00:30.757753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.377 [2024-07-26 09:00:30.757760] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.757768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf05f80) on tqpair=0xeb7630 00:27:12.377 [2024-07-26 09:00:30.757776] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:12.377 [2024-07-26 09:00:30.757785] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:12.377 [2024-07-26 09:00:30.757798] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:12.377 [2024-07-26 09:00:30.757813] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:12.377 [2024-07-26 09:00:30.757831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.757839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeb7630) 00:27:12.377 [2024-07-26 09:00:30.757851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.377 [2024-07-26 09:00:30.757872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf05f80, cid 0, qid 0 00:27:12.377 [2024-07-26 
09:00:30.758023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.377 [2024-07-26 09:00:30.758036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.377 [2024-07-26 09:00:30.758044] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758051] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeb7630): datao=0, datal=4096, cccid=0 00:27:12.377 [2024-07-26 09:00:30.758066] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf05f80) on tqpair(0xeb7630): expected_datao=0, payload_size=4096 00:27:12.377 [2024-07-26 09:00:30.758076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758089] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758098] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.377 [2024-07-26 09:00:30.758121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.377 [2024-07-26 09:00:30.758128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf05f80) on tqpair=0xeb7630 00:27:12.377 [2024-07-26 09:00:30.758148] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:12.377 [2024-07-26 09:00:30.758157] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:12.377 [2024-07-26 09:00:30.758165] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:12.377 [2024-07-26 09:00:30.758175] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:12.377 [2024-07-26 09:00:30.758183] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:12.377 [2024-07-26 09:00:30.758191] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:12.377 [2024-07-26 09:00:30.758211] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:12.377 [2024-07-26 09:00:30.758229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeb7630) 00:27:12.377 [2024-07-26 09:00:30.758257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:12.377 [2024-07-26 09:00:30.758279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf05f80, cid 0, qid 0 00:27:12.377 [2024-07-26 09:00:30.758406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.377 [2024-07-26 09:00:30.758418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.377 [2024-07-26 09:00:30.758425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf05f80) on tqpair=0xeb7630 00:27:12.377 [2024-07-26 09:00:30.758446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758460] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeb7630) 00:27:12.377 [2024-07-26 09:00:30.758471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.377 [2024-07-26 09:00:30.758481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xeb7630) 00:27:12.377 [2024-07-26 09:00:30.758505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.377 [2024-07-26 09:00:30.758515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xeb7630) 00:27:12.377 [2024-07-26 09:00:30.758538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.377 [2024-07-26 09:00:30.758549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.377 [2024-07-26 09:00:30.758563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeb7630) 00:27:12.377 [2024-07-26 09:00:30.758572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.377 [2024-07-26 09:00:30.758581] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive 
timeout (timeout 30000 ms) 00:27:12.377 [2024-07-26 09:00:30.758601] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:12.377 [2024-07-26 09:00:30.758614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.758622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeb7630) 00:27:12.378 [2024-07-26 09:00:30.758633] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.378 [2024-07-26 09:00:30.758656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf05f80, cid 0, qid 0 00:27:12.378 [2024-07-26 09:00:30.758667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06100, cid 1, qid 0 00:27:12.378 [2024-07-26 09:00:30.758679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06280, cid 2, qid 0 00:27:12.378 [2024-07-26 09:00:30.758688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06400, cid 3, qid 0 00:27:12.378 [2024-07-26 09:00:30.758696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06580, cid 4, qid 0 00:27:12.378 [2024-07-26 09:00:30.758870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.378 [2024-07-26 09:00:30.758882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.378 [2024-07-26 09:00:30.758890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.758897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06580) on tqpair=0xeb7630 00:27:12.378 [2024-07-26 09:00:30.758907] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:12.378 [2024-07-26 
09:00:30.758917] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:12.378 [2024-07-26 09:00:30.758934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.758944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeb7630) 00:27:12.378 [2024-07-26 09:00:30.758954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.378 [2024-07-26 09:00:30.758975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06580, cid 4, qid 0 00:27:12.378 [2024-07-26 09:00:30.759148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.378 [2024-07-26 09:00:30.759164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.378 [2024-07-26 09:00:30.759171] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759178] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeb7630): datao=0, datal=4096, cccid=4 00:27:12.378 [2024-07-26 09:00:30.759187] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf06580) on tqpair(0xeb7630): expected_datao=0, payload_size=4096 00:27:12.378 [2024-07-26 09:00:30.759194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759211] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759221] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.378 [2024-07-26 09:00:30.759302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.378 [2024-07-26 09:00:30.759309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06580) on tqpair=0xeb7630 00:27:12.378 [2024-07-26 09:00:30.759335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:12.378 [2024-07-26 09:00:30.759376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeb7630) 00:27:12.378 [2024-07-26 09:00:30.759398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.378 [2024-07-26 09:00:30.759411] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeb7630) 00:27:12.378 [2024-07-26 09:00:30.759435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.378 [2024-07-26 09:00:30.759462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06580, cid 4, qid 0 00:27:12.378 [2024-07-26 09:00:30.759474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06700, cid 5, qid 0 00:27:12.378 [2024-07-26 09:00:30.759640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.378 [2024-07-26 09:00:30.759656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.378 [2024-07-26 09:00:30.759663] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759670] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeb7630): 
datao=0, datal=1024, cccid=4 00:27:12.378 [2024-07-26 09:00:30.759678] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf06580) on tqpair(0xeb7630): expected_datao=0, payload_size=1024 00:27:12.378 [2024-07-26 09:00:30.759686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759697] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759704] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.378 [2024-07-26 09:00:30.759723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.378 [2024-07-26 09:00:30.759730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.759736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06700) on tqpair=0xeb7630 00:27:12.378 [2024-07-26 09:00:30.802076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.378 [2024-07-26 09:00:30.802094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.378 [2024-07-26 09:00:30.802102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06580) on tqpair=0xeb7630 00:27:12.378 [2024-07-26 09:00:30.802129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeb7630) 00:27:12.378 [2024-07-26 09:00:30.802150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.378 [2024-07-26 09:00:30.802180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xf06580, cid 4, qid 0 00:27:12.378 [2024-07-26 09:00:30.802350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.378 [2024-07-26 09:00:30.802363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.378 [2024-07-26 09:00:30.802370] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802377] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeb7630): datao=0, datal=3072, cccid=4 00:27:12.378 [2024-07-26 09:00:30.802385] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf06580) on tqpair(0xeb7630): expected_datao=0, payload_size=3072 00:27:12.378 [2024-07-26 09:00:30.802393] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802404] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802412] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.378 [2024-07-26 09:00:30.802441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.378 [2024-07-26 09:00:30.802448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06580) on tqpair=0xeb7630 00:27:12.378 [2024-07-26 09:00:30.802470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeb7630) 00:27:12.378 [2024-07-26 09:00:30.802490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.378 [2024-07-26 09:00:30.802518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0xf06580, cid 4, qid 0 00:27:12.378 [2024-07-26 09:00:30.802666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.378 [2024-07-26 09:00:30.802679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.378 [2024-07-26 09:00:30.802686] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802693] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeb7630): datao=0, datal=8, cccid=4 00:27:12.378 [2024-07-26 09:00:30.802701] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf06580) on tqpair(0xeb7630): expected_datao=0, payload_size=8 00:27:12.378 [2024-07-26 09:00:30.802708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802718] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.378 [2024-07-26 09:00:30.802726] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.642 [2024-07-26 09:00:30.844207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.642 [2024-07-26 09:00:30.844225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.642 [2024-07-26 09:00:30.844233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.642 [2024-07-26 09:00:30.844240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06580) on tqpair=0xeb7630
00:27:12.642 =====================================================
00:27:12.642 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:27:12.642 =====================================================
00:27:12.642 Controller Capabilities/Features
00:27:12.642 ================================
00:27:12.642 Vendor ID: 0000
00:27:12.642 Subsystem Vendor ID: 0000
00:27:12.642 Serial Number: ....................
00:27:12.642 Model Number: ........................................
00:27:12.642 Firmware Version: 24.09
00:27:12.642 Recommended Arb Burst: 0
00:27:12.642 IEEE OUI Identifier: 00 00 00
00:27:12.642 Multi-path I/O
00:27:12.642 May have multiple subsystem ports: No
00:27:12.642 May have multiple controllers: No
00:27:12.642 Associated with SR-IOV VF: No
00:27:12.642 Max Data Transfer Size: 131072
00:27:12.642 Max Number of Namespaces: 0
00:27:12.642 Max Number of I/O Queues: 1024
00:27:12.642 NVMe Specification Version (VS): 1.3
00:27:12.642 NVMe Specification Version (Identify): 1.3
00:27:12.642 Maximum Queue Entries: 128
00:27:12.642 Contiguous Queues Required: Yes
00:27:12.642 Arbitration Mechanisms Supported
00:27:12.642 Weighted Round Robin: Not Supported
00:27:12.642 Vendor Specific: Not Supported
00:27:12.642 Reset Timeout: 15000 ms
00:27:12.642 Doorbell Stride: 4 bytes
00:27:12.642 NVM Subsystem Reset: Not Supported
00:27:12.642 Command Sets Supported
00:27:12.642 NVM Command Set: Supported
00:27:12.642 Boot Partition: Not Supported
00:27:12.642 Memory Page Size Minimum: 4096 bytes
00:27:12.642 Memory Page Size Maximum: 4096 bytes
00:27:12.642 Persistent Memory Region: Not Supported
00:27:12.642 Optional Asynchronous Events Supported
00:27:12.642 Namespace Attribute Notices: Not Supported
00:27:12.642 Firmware Activation Notices: Not Supported
00:27:12.642 ANA Change Notices: Not Supported
00:27:12.642 PLE Aggregate Log Change Notices: Not Supported
00:27:12.642 LBA Status Info Alert Notices: Not Supported
00:27:12.642 EGE Aggregate Log Change Notices: Not Supported
00:27:12.642 Normal NVM Subsystem Shutdown event: Not Supported
00:27:12.642 Zone Descriptor Change Notices: Not Supported
00:27:12.642 Discovery Log Change Notices: Supported
00:27:12.642 Controller Attributes
00:27:12.642 128-bit Host Identifier: Not Supported
00:27:12.642 Non-Operational Permissive Mode: Not Supported
00:27:12.642 NVM Sets: Not Supported
00:27:12.642 Read Recovery Levels: Not Supported
00:27:12.642 Endurance Groups: Not Supported
00:27:12.642 Predictable Latency Mode: Not Supported
00:27:12.642 Traffic Based Keep ALive: Not Supported
00:27:12.642 Namespace Granularity: Not Supported
00:27:12.642 SQ Associations: Not Supported
00:27:12.642 UUID List: Not Supported
00:27:12.642 Multi-Domain Subsystem: Not Supported
00:27:12.642 Fixed Capacity Management: Not Supported
00:27:12.642 Variable Capacity Management: Not Supported
00:27:12.642 Delete Endurance Group: Not Supported
00:27:12.642 Delete NVM Set: Not Supported
00:27:12.642 Extended LBA Formats Supported: Not Supported
00:27:12.642 Flexible Data Placement Supported: Not Supported
00:27:12.642 
00:27:12.642 Controller Memory Buffer Support
00:27:12.642 ================================
00:27:12.642 Supported: No
00:27:12.642 
00:27:12.642 Persistent Memory Region Support
00:27:12.642 ================================
00:27:12.642 Supported: No
00:27:12.642 
00:27:12.642 Admin Command Set Attributes
00:27:12.642 ============================
00:27:12.642 Security Send/Receive: Not Supported
00:27:12.642 Format NVM: Not Supported
00:27:12.642 Firmware Activate/Download: Not Supported
00:27:12.642 Namespace Management: Not Supported
00:27:12.642 Device Self-Test: Not Supported
00:27:12.642 Directives: Not Supported
00:27:12.642 NVMe-MI: Not Supported
00:27:12.642 Virtualization Management: Not Supported
00:27:12.642 Doorbell Buffer Config: Not Supported
00:27:12.642 Get LBA Status Capability: Not Supported
00:27:12.642 Command & Feature Lockdown Capability: Not Supported
00:27:12.642 Abort Command Limit: 1
00:27:12.642 Async Event Request Limit: 4
00:27:12.642 Number of Firmware Slots: N/A
00:27:12.642 Firmware Slot 1 Read-Only: N/A
00:27:12.642 Firmware Activation Without Reset: N/A
00:27:12.642 Multiple Update Detection Support: N/A
00:27:12.642 Firmware Update Granularity: No Information Provided
00:27:12.642 Per-Namespace SMART Log: No
00:27:12.642 Asymmetric Namespace Access Log Page: Not Supported
00:27:12.642 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:12.642 Command Effects Log Page: Not Supported
00:27:12.642 Get Log Page Extended Data: Supported
00:27:12.642 Telemetry Log Pages: Not Supported
00:27:12.642 Persistent Event Log Pages: Not Supported
00:27:12.642 Supported Log Pages Log Page: May Support
00:27:12.642 Commands Supported & Effects Log Page: Not Supported
00:27:12.642 Feature Identifiers & Effects Log Page:May Support
00:27:12.642 NVMe-MI Commands & Effects Log Page: May Support
00:27:12.642 Data Area 4 for Telemetry Log: Not Supported
00:27:12.642 Error Log Page Entries Supported: 128
00:27:12.642 Keep Alive: Not Supported
00:27:12.642 
00:27:12.642 NVM Command Set Attributes
00:27:12.642 ==========================
00:27:12.642 Submission Queue Entry Size
00:27:12.642 Max: 1
00:27:12.642 Min: 1
00:27:12.642 Completion Queue Entry Size
00:27:12.642 Max: 1
00:27:12.642 Min: 1
00:27:12.642 Number of Namespaces: 0
00:27:12.642 Compare Command: Not Supported
00:27:12.642 Write Uncorrectable Command: Not Supported
00:27:12.642 Dataset Management Command: Not Supported
00:27:12.642 Write Zeroes Command: Not Supported
00:27:12.642 Set Features Save Field: Not Supported
00:27:12.642 Reservations: Not Supported
00:27:12.642 Timestamp: Not Supported
00:27:12.642 Copy: Not Supported
00:27:12.642 Volatile Write Cache: Not Present
00:27:12.642 Atomic Write Unit (Normal): 1
00:27:12.642 Atomic Write Unit (PFail): 1
00:27:12.642 Atomic Compare & Write Unit: 1
00:27:12.642 Fused Compare & Write: Supported
00:27:12.642 Scatter-Gather List
00:27:12.642 SGL Command Set: Supported
00:27:12.642 SGL Keyed: Supported
00:27:12.642 SGL Bit Bucket Descriptor: Not Supported
00:27:12.642 SGL Metadata Pointer: Not Supported
00:27:12.642 Oversized SGL: Not Supported
00:27:12.642 SGL Metadata Address: Not Supported
00:27:12.642 SGL Offset: Supported
00:27:12.642 Transport SGL Data Block: Not Supported
00:27:12.642 Replay Protected Memory Block: Not Supported
00:27:12.642 
00:27:12.642 Firmware Slot Information
00:27:12.642 =========================
00:27:12.642 Active slot: 0
00:27:12.642 
00:27:12.642 
00:27:12.642 Error Log
00:27:12.642 =========
00:27:12.642 
00:27:12.642 Active Namespaces
00:27:12.642 =================
00:27:12.642 Discovery Log Page
00:27:12.642 ==================
00:27:12.642 Generation Counter: 2
00:27:12.642 Number of Records: 2
00:27:12.642 Record Format: 0
00:27:12.642 
00:27:12.642 Discovery Log Entry 0
00:27:12.642 ----------------------
00:27:12.642 Transport Type: 3 (TCP)
00:27:12.642 Address Family: 1 (IPv4)
00:27:12.642 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:12.642 Entry Flags:
00:27:12.642 Duplicate Returned Information: 1
00:27:12.642 Explicit Persistent Connection Support for Discovery: 1
00:27:12.642 Transport Requirements:
00:27:12.642 Secure Channel: Not Required
00:27:12.643 Port ID: 0 (0x0000)
00:27:12.643 Controller ID: 65535 (0xffff)
00:27:12.643 Admin Max SQ Size: 128
00:27:12.643 Transport Service Identifier: 4420
00:27:12.643 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:12.643 Transport Address: 10.0.0.2
00:27:12.643 Discovery Log Entry 1
00:27:12.643 ----------------------
00:27:12.643 Transport Type: 3 (TCP)
00:27:12.643 Address Family: 1 (IPv4)
00:27:12.643 Subsystem Type: 2 (NVM Subsystem)
00:27:12.643 Entry Flags:
00:27:12.643 Duplicate Returned Information: 0
00:27:12.643 Explicit Persistent Connection Support for Discovery: 0
00:27:12.643 Transport Requirements:
00:27:12.643 Secure Channel: Not Required
00:27:12.643 Port ID: 0 (0x0000)
00:27:12.643 Controller ID: 65535 (0xffff)
00:27:12.643 Admin Max SQ Size: 128
00:27:12.643 Transport Service Identifier: 4420
00:27:12.643 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:27:12.643 Transport Address: 10.0.0.2 [2024-07-26 09:00:30.844362] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:27:12.643 [2024-07-26 09:00:30.844386]
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf05f80) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.844399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.643 [2024-07-26 09:00:30.844409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06100) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.844417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.643 [2024-07-26 09:00:30.844426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06280) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.844434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.643 [2024-07-26 09:00:30.844442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06400) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.844450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.643 [2024-07-26 09:00:30.844469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.844479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.844486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeb7630) 00:27:12.643 [2024-07-26 09:00:30.844497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.643 [2024-07-26 09:00:30.844523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06400, cid 3, qid 0 00:27:12.643 [2024-07-26 09:00:30.844669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.643 [2024-07-26 09:00:30.844682] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.643 [2024-07-26 09:00:30.844689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.844696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06400) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.844709] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.844717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.844724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeb7630) 00:27:12.643 [2024-07-26 09:00:30.844734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.643 [2024-07-26 09:00:30.844760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06400, cid 3, qid 0 00:27:12.643 [2024-07-26 09:00:30.844897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.643 [2024-07-26 09:00:30.844912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.643 [2024-07-26 09:00:30.844920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.844927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06400) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.844938] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:12.643 [2024-07-26 09:00:30.844948] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:12.643 [2024-07-26 09:00:30.844964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.844974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.844981] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeb7630) 00:27:12.643 [2024-07-26 09:00:30.844992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.643 [2024-07-26 09:00:30.845014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06400, cid 3, qid 0 00:27:12.643 [2024-07-26 09:00:30.845143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.643 [2024-07-26 09:00:30.845159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.643 [2024-07-26 09:00:30.845166] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06400) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.845191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeb7630) 00:27:12.643 [2024-07-26 09:00:30.845218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.643 [2024-07-26 09:00:30.845240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06400, cid 3, qid 0 00:27:12.643 [2024-07-26 09:00:30.845348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.643 [2024-07-26 09:00:30.845361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.643 [2024-07-26 09:00:30.845368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06400) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 
09:00:30.845391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeb7630) 00:27:12.643 [2024-07-26 09:00:30.845418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.643 [2024-07-26 09:00:30.845439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06400, cid 3, qid 0 00:27:12.643 [2024-07-26 09:00:30.845550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.643 [2024-07-26 09:00:30.845565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.643 [2024-07-26 09:00:30.845572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06400) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.845596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845613] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeb7630) 00:27:12.643 [2024-07-26 09:00:30.845624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.643 [2024-07-26 09:00:30.845649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06400, cid 3, qid 0 00:27:12.643 [2024-07-26 09:00:30.845748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.643 [2024-07-26 09:00:30.845761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.643 [2024-07-26 09:00:30.845768] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06400) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.845791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeb7630) 00:27:12.643 [2024-07-26 09:00:30.845819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.643 [2024-07-26 09:00:30.845839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06400, cid 3, qid 0 00:27:12.643 [2024-07-26 09:00:30.845965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.643 [2024-07-26 09:00:30.845977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.643 [2024-07-26 09:00:30.845984] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.845991] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06400) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.846007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.846017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.846024] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeb7630) 00:27:12.643 [2024-07-26 09:00:30.846034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.643 [2024-07-26 09:00:30.846055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06400, cid 3, qid 0 00:27:12.643 [2024-07-26 
09:00:30.850084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.643 [2024-07-26 09:00:30.850096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.643 [2024-07-26 09:00:30.850104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.643 [2024-07-26 09:00:30.850111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06400) on tqpair=0xeb7630 00:27:12.643 [2024-07-26 09:00:30.850128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.850139] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.850146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeb7630) 00:27:12.644 [2024-07-26 09:00:30.850157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.644 [2024-07-26 09:00:30.850180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf06400, cid 3, qid 0 00:27:12.644 [2024-07-26 09:00:30.850322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.644 [2024-07-26 09:00:30.850337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.644 [2024-07-26 09:00:30.850344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.850351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf06400) on tqpair=0xeb7630 00:27:12.644 [2024-07-26 09:00:30.850365] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:27:12.644 00:27:12.644 09:00:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
00:27:12.644 [2024-07-26 09:00:30.886341] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:12.644 [2024-07-26 09:00:30.886399] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057286 ] 00:27:12.644 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.644 [2024-07-26 09:00:30.903695] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:12.644 [2024-07-26 09:00:30.921486] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:12.644 [2024-07-26 09:00:30.921534] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:12.644 [2024-07-26 09:00:30.921543] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:12.644 [2024-07-26 09:00:30.921558] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:12.644 [2024-07-26 09:00:30.921571] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:12.644 [2024-07-26 09:00:30.925128] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:12.644 [2024-07-26 09:00:30.925179] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x154d630 0 00:27:12.644 [2024-07-26 09:00:30.933086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:12.644 [2024-07-26 09:00:30.933108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:12.644 [2024-07-26 09:00:30.933118] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:12.644 
[2024-07-26 09:00:30.933124] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:12.644 [2024-07-26 09:00:30.933182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.933194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.933201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x154d630) 00:27:12.644 [2024-07-26 09:00:30.933216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:12.644 [2024-07-26 09:00:30.933242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159bf80, cid 0, qid 0 00:27:12.644 [2024-07-26 09:00:30.941076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.644 [2024-07-26 09:00:30.941100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.644 [2024-07-26 09:00:30.941108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159bf80) on tqpair=0x154d630 00:27:12.644 [2024-07-26 09:00:30.941129] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:12.644 [2024-07-26 09:00:30.941154] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:12.644 [2024-07-26 09:00:30.941168] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:12.644 [2024-07-26 09:00:30.941187] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x154d630) 00:27:12.644 [2024-07-26 09:00:30.941214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.644 [2024-07-26 09:00:30.941238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159bf80, cid 0, qid 0 00:27:12.644 [2024-07-26 09:00:30.941396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.644 [2024-07-26 09:00:30.941413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.644 [2024-07-26 09:00:30.941421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159bf80) on tqpair=0x154d630 00:27:12.644 [2024-07-26 09:00:30.941440] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:12.644 [2024-07-26 09:00:30.941454] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:12.644 [2024-07-26 09:00:30.941466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x154d630) 00:27:12.644 [2024-07-26 09:00:30.941491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.644 [2024-07-26 09:00:30.941512] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159bf80, cid 0, qid 0 00:27:12.644 [2024-07-26 09:00:30.941627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.644 [2024-07-26 09:00:30.941642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.644 
[2024-07-26 09:00:30.941649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159bf80) on tqpair=0x154d630 00:27:12.644 [2024-07-26 09:00:30.941666] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:12.644 [2024-07-26 09:00:30.941680] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:12.644 [2024-07-26 09:00:30.941692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x154d630) 00:27:12.644 [2024-07-26 09:00:30.941716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.644 [2024-07-26 09:00:30.941738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159bf80, cid 0, qid 0 00:27:12.644 [2024-07-26 09:00:30.941845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.644 [2024-07-26 09:00:30.941860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.644 [2024-07-26 09:00:30.941867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159bf80) on tqpair=0x154d630 00:27:12.644 [2024-07-26 09:00:30.941883] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:12.644 [2024-07-26 09:00:30.941900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.644 
[2024-07-26 09:00:30.941909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.941916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x154d630) 00:27:12.644 [2024-07-26 09:00:30.941926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.644 [2024-07-26 09:00:30.941947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159bf80, cid 0, qid 0 00:27:12.644 [2024-07-26 09:00:30.942053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.644 [2024-07-26 09:00:30.942076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.644 [2024-07-26 09:00:30.942084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.942091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159bf80) on tqpair=0x154d630 00:27:12.644 [2024-07-26 09:00:30.942106] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:12.644 [2024-07-26 09:00:30.942116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:12.644 [2024-07-26 09:00:30.942130] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:12.644 [2024-07-26 09:00:30.942240] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:12.644 [2024-07-26 09:00:30.942248] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:12.644 [2024-07-26 09:00:30.942261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.644 
[2024-07-26 09:00:30.942269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.644 [2024-07-26 09:00:30.942275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x154d630) 00:27:12.644 [2024-07-26 09:00:30.942286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.644 [2024-07-26 09:00:30.942307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159bf80, cid 0, qid 0 00:27:12.644 [2024-07-26 09:00:30.942449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.644 [2024-07-26 09:00:30.942464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.644 [2024-07-26 09:00:30.942471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.645 [2024-07-26 09:00:30.942478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159bf80) on tqpair=0x154d630 00:27:12.645 [2024-07-26 09:00:30.942486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:12.645 [2024-07-26 09:00:30.942502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.645 [2024-07-26 09:00:30.942511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.645 [2024-07-26 09:00:30.942518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x154d630) 00:27:12.645 [2024-07-26 09:00:30.942529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.645 [2024-07-26 09:00:30.942549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159bf80, cid 0, qid 0 00:27:12.645 [2024-07-26 09:00:30.942659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.645 [2024-07-26 09:00:30.942674] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.645 [2024-07-26 09:00:30.942681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.645 [2024-07-26 09:00:30.942688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159bf80) on tqpair=0x154d630 00:27:12.645 [2024-07-26 09:00:30.942696] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:12.645 [2024-07-26 09:00:30.942706] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:12.645 [2024-07-26 09:00:30.942719] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:12.645 [2024-07-26 09:00:30.942733] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:12.645 [2024-07-26 09:00:30.942747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.645 [2024-07-26 09:00:30.942755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x154d630) 00:27:12.645 [2024-07-26 09:00:30.942766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.645 [2024-07-26 09:00:30.942790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159bf80, cid 0, qid 0 00:27:12.645 [2024-07-26 09:00:30.942946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.645 [2024-07-26 09:00:30.942961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.645 [2024-07-26 09:00:30.942969] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.645 [2024-07-26 09:00:30.942976] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x154d630): datao=0, datal=4096, cccid=0 00:27:12.645 [2024-07-26 09:00:30.942984] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159bf80) on tqpair(0x154d630): expected_datao=0, payload_size=4096 00:27:12.645 [2024-07-26 09:00:30.942992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.645 [2024-07-26 09:00:30.943017] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.645 [2024-07-26 09:00:30.943027] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.645 [2024-07-26 09:00:30.943104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.645 [2024-07-26 09:00:30.943119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.645 [2024-07-26 09:00:30.943127] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.645 [2024-07-26 09:00:30.943134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159bf80) on tqpair=0x154d630 00:27:12.645 [2024-07-26 09:00:30.943145] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:12.645 [2024-07-26 09:00:30.943154] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:12.645 [2024-07-26 09:00:30.943162] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:12.645 [2024-07-26 09:00:30.943171] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:12.645 [2024-07-26 09:00:30.943179] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:12.645 [2024-07-26 09:00:30.943187] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:12.645 
[2024-07-26 09:00:30.943202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:27:12.645 [2024-07-26 09:00:30.943220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x154d630)
00:27:12.645 [2024-07-26 09:00:30.943246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:27:12.645 [2024-07-26 09:00:30.943267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159bf80, cid 0, qid 0
00:27:12.645 [2024-07-26 09:00:30.943393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.645 [2024-07-26 09:00:30.943405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.645 [2024-07-26 09:00:30.943412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159bf80) on tqpair=0x154d630
00:27:12.645 [2024-07-26 09:00:30.943431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x154d630)
00:27:12.645 [2024-07-26 09:00:30.943454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:12.645 [2024-07-26 09:00:30.943464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x154d630)
00:27:12.645 [2024-07-26 09:00:30.943491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:12.645 [2024-07-26 09:00:30.943501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943508] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x154d630)
00:27:12.645 [2024-07-26 09:00:30.943523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:12.645 [2024-07-26 09:00:30.943532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x154d630)
00:27:12.645 [2024-07-26 09:00:30.943570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:12.645 [2024-07-26 09:00:30.943578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:27:12.645 [2024-07-26 09:00:30.943598] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:27:12.645 [2024-07-26 09:00:30.943611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x154d630)
00:27:12.645 [2024-07-26 09:00:30.943628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.645 [2024-07-26 09:00:30.943650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159bf80, cid 0, qid 0
00:27:12.645 [2024-07-26 09:00:30.943676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c100, cid 1, qid 0
00:27:12.645 [2024-07-26 09:00:30.943685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c280, cid 2, qid 0
00:27:12.645 [2024-07-26 09:00:30.943693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c400, cid 3, qid 0
00:27:12.645 [2024-07-26 09:00:30.943700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c580, cid 4, qid 0
00:27:12.645 [2024-07-26 09:00:30.943879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.645 [2024-07-26 09:00:30.943894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.645 [2024-07-26 09:00:30.943901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c580) on tqpair=0x154d630
00:27:12.645 [2024-07-26 09:00:30.943918] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:27:12.645 [2024-07-26 09:00:30.943927] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:27:12.645 [2024-07-26 09:00:30.943945] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:27:12.645 [2024-07-26 09:00:30.943958] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:27:12.645 [2024-07-26 09:00:30.943969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943976] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.943983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x154d630)
00:27:12.645 [2024-07-26 09:00:30.943993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:27:12.645 [2024-07-26 09:00:30.944018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c580, cid 4, qid 0
00:27:12.645 [2024-07-26 09:00:30.944168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.645 [2024-07-26 09:00:30.944182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.645 [2024-07-26 09:00:30.944190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.944196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c580) on tqpair=0x154d630
00:27:12.645 [2024-07-26 09:00:30.944264] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:27:12.645 [2024-07-26 09:00:30.944284] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:27:12.645 [2024-07-26 09:00:30.944299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.645 [2024-07-26 09:00:30.944307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x154d630)
00:27:12.646 [2024-07-26 09:00:30.944317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.646 [2024-07-26 09:00:30.944339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c580, cid 4, qid 0
00:27:12.646 [2024-07-26 09:00:30.944500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:12.646 [2024-07-26 09:00:30.944515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:12.646 [2024-07-26 09:00:30.944522] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.944529] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x154d630): datao=0, datal=4096, cccid=4
00:27:12.646 [2024-07-26 09:00:30.944536] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159c580) on tqpair(0x154d630): expected_datao=0, payload_size=4096
00:27:12.646 [2024-07-26 09:00:30.944544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.944561] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.944570] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.646 [2024-07-26 09:00:30.989090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.646 [2024-07-26 09:00:30.989097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c580) on tqpair=0x154d630
00:27:12.646 [2024-07-26 09:00:30.989127] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:27:12.646 [2024-07-26 09:00:30.989146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.989179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.989194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x154d630)
00:27:12.646 [2024-07-26 09:00:30.989213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.646 [2024-07-26 09:00:30.989236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c580, cid 4, qid 0
00:27:12.646 [2024-07-26 09:00:30.989412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:12.646 [2024-07-26 09:00:30.989428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:12.646 [2024-07-26 09:00:30.989435] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989441] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x154d630): datao=0, datal=4096, cccid=4
00:27:12.646 [2024-07-26 09:00:30.989456] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159c580) on tqpair(0x154d630): expected_datao=0, payload_size=4096
00:27:12.646 [2024-07-26 09:00:30.989464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989482] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989491] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.646 [2024-07-26 09:00:30.989578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.646 [2024-07-26 09:00:30.989585] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c580) on tqpair=0x154d630
00:27:12.646 [2024-07-26 09:00:30.989616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.989636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.989649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x154d630)
00:27:12.646 [2024-07-26 09:00:30.989668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.646 [2024-07-26 09:00:30.989690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c580, cid 4, qid 0
00:27:12.646 [2024-07-26 09:00:30.989813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:12.646 [2024-07-26 09:00:30.989828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:12.646 [2024-07-26 09:00:30.989835] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989842] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x154d630): datao=0, datal=4096, cccid=4
00:27:12.646 [2024-07-26 09:00:30.989849] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159c580) on tqpair(0x154d630): expected_datao=0, payload_size=4096
00:27:12.646 [2024-07-26 09:00:30.989857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989874] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989883] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.646 [2024-07-26 09:00:30.989965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.646 [2024-07-26 09:00:30.989972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.989979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c580) on tqpair=0x154d630
00:27:12.646 [2024-07-26 09:00:30.989993] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.990007] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.990022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.990036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.990046] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.990055] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.990074] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:27:12.646 [2024-07-26 09:00:30.990086] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:27:12.646 [2024-07-26 09:00:30.990095] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:27:12.646 [2024-07-26 09:00:30.990115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.990124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x154d630)
00:27:12.646 [2024-07-26 09:00:30.990135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.646 [2024-07-26 09:00:30.990146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.990153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.990160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x154d630)
00:27:12.646 [2024-07-26 09:00:30.990169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:27:12.646 [2024-07-26 09:00:30.990195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c580, cid 4, qid 0
00:27:12.646 [2024-07-26 09:00:30.990207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c700, cid 5, qid 0
00:27:12.646 [2024-07-26 09:00:30.990364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.646 [2024-07-26 09:00:30.990376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.646 [2024-07-26 09:00:30.990383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.990390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c580) on tqpair=0x154d630
00:27:12.646 [2024-07-26 09:00:30.990401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.646 [2024-07-26 09:00:30.990411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.646 [2024-07-26 09:00:30.990417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.990424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c700) on tqpair=0x154d630
00:27:12.646 [2024-07-26 09:00:30.990439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.646 [2024-07-26 09:00:30.990448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x154d630)
00:27:12.647 [2024-07-26 09:00:30.990459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.647 [2024-07-26 09:00:30.990480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c700, cid 5, qid 0
00:27:12.647 [2024-07-26 09:00:30.990607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.647 [2024-07-26 09:00:30.990622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.647 [2024-07-26 09:00:30.990629] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.990636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c700) on tqpair=0x154d630
00:27:12.647 [2024-07-26 09:00:30.990652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.990661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x154d630)
00:27:12.647 [2024-07-26 09:00:30.990672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.647 [2024-07-26 09:00:30.990692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c700, cid 5, qid 0
00:27:12.647 [2024-07-26 09:00:30.990805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.647 [2024-07-26 09:00:30.990820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.647 [2024-07-26 09:00:30.990827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.990834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c700) on tqpair=0x154d630
00:27:12.647 [2024-07-26 09:00:30.990854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.990863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x154d630)
00:27:12.647 [2024-07-26 09:00:30.990874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.647 [2024-07-26 09:00:30.990895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c700, cid 5, qid 0
00:27:12.647 [2024-07-26 09:00:30.991001] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.647 [2024-07-26 09:00:30.991013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.647 [2024-07-26 09:00:30.991020] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c700) on tqpair=0x154d630
00:27:12.647 [2024-07-26 09:00:30.991052] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x154d630)
00:27:12.647 [2024-07-26 09:00:30.991082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.647 [2024-07-26 09:00:30.991094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x154d630)
00:27:12.647 [2024-07-26 09:00:30.991111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.647 [2024-07-26 09:00:30.991122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x154d630)
00:27:12.647 [2024-07-26 09:00:30.991139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.647 [2024-07-26 09:00:30.991150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x154d630)
00:27:12.647 [2024-07-26 09:00:30.991167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.647 [2024-07-26 09:00:30.991190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c700, cid 5, qid 0
00:27:12.647 [2024-07-26 09:00:30.991201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c580, cid 4, qid 0
00:27:12.647 [2024-07-26 09:00:30.991208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c880, cid 6, qid 0
00:27:12.647 [2024-07-26 09:00:30.991216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159ca00, cid 7, qid 0
00:27:12.647 [2024-07-26 09:00:30.991503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:12.647 [2024-07-26 09:00:30.991519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:12.647 [2024-07-26 09:00:30.991526] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991532] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x154d630): datao=0, datal=8192, cccid=5
00:27:12.647 [2024-07-26 09:00:30.991540] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159c700) on tqpair(0x154d630): expected_datao=0, payload_size=8192
00:27:12.647 [2024-07-26 09:00:30.991548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991558] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991566] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:12.647 [2024-07-26 09:00:30.991588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:12.647 [2024-07-26 09:00:30.991595] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991602] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x154d630): datao=0, datal=512, cccid=4
00:27:12.647 [2024-07-26 09:00:30.991609] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159c580) on tqpair(0x154d630): expected_datao=0, payload_size=512
00:27:12.647 [2024-07-26 09:00:30.991617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991626] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991633] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:12.647 [2024-07-26 09:00:30.991651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:12.647 [2024-07-26 09:00:30.991658] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991664] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x154d630): datao=0, datal=512, cccid=6
00:27:12.647 [2024-07-26 09:00:30.991672] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159c880) on tqpair(0x154d630): expected_datao=0, payload_size=512
00:27:12.647 [2024-07-26 09:00:30.991679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991688] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991695] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:12.647 [2024-07-26 09:00:30.991713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:12.647 [2024-07-26 09:00:30.991719] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991726] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x154d630): datao=0, datal=4096, cccid=7
00:27:12.647 [2024-07-26 09:00:30.991733] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x159ca00) on tqpair(0x154d630): expected_datao=0, payload_size=4096
00:27:12.647 [2024-07-26 09:00:30.991741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991750] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991758] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.647 [2024-07-26 09:00:30.991795] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.647 [2024-07-26 09:00:30.991802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c700) on tqpair=0x154d630
00:27:12.647 [2024-07-26 09:00:30.991827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.647 [2024-07-26 09:00:30.991838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.647 [2024-07-26 09:00:30.991845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c580) on tqpair=0x154d630
00:27:12.647 [2024-07-26 09:00:30.991881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.647 [2024-07-26 09:00:30.991891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.647 [2024-07-26 09:00:30.991897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c880) on tqpair=0x154d630
00:27:12.647 [2024-07-26 09:00:30.991913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.647 [2024-07-26 09:00:30.991922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.647 [2024-07-26 09:00:30.991928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.647 [2024-07-26 09:00:30.991935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159ca00) on tqpair=0x154d630
00:27:12.647 =====================================================
00:27:12.647 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:12.647 =====================================================
00:27:12.647 Controller Capabilities/Features
00:27:12.647 ================================
00:27:12.647 Vendor ID: 8086
00:27:12.647 Subsystem Vendor ID: 8086
00:27:12.647 Serial Number: SPDK00000000000001
00:27:12.647 Model Number: SPDK bdev Controller
00:27:12.647 Firmware Version: 24.09
00:27:12.647 Recommended Arb Burst: 6
00:27:12.647 IEEE OUI Identifier: e4 d2 5c
00:27:12.647 Multi-path I/O
00:27:12.647 May have multiple subsystem ports: Yes
00:27:12.647 May have multiple controllers: Yes
00:27:12.647 Associated with SR-IOV VF: No
00:27:12.647 Max Data Transfer Size: 131072
00:27:12.647 Max Number of Namespaces: 32
00:27:12.647 Max Number of I/O Queues: 127
00:27:12.647 NVMe Specification Version (VS): 1.3
00:27:12.647 NVMe Specification Version (Identify): 1.3
00:27:12.647 Maximum Queue Entries: 128
00:27:12.648 Contiguous Queues Required: Yes
00:27:12.648 Arbitration Mechanisms Supported
00:27:12.648 Weighted Round Robin: Not Supported
00:27:12.648 Vendor Specific: Not Supported
00:27:12.648 Reset Timeout: 15000 ms
00:27:12.648 Doorbell Stride: 4 bytes
00:27:12.648 NVM Subsystem Reset: Not Supported
00:27:12.648 Command Sets Supported
00:27:12.648 NVM Command Set: Supported
00:27:12.648 Boot Partition: Not Supported
00:27:12.648 Memory Page Size Minimum: 4096 bytes
00:27:12.648 Memory Page Size Maximum: 4096 bytes
00:27:12.648 Persistent Memory Region: Not Supported
00:27:12.648 Optional Asynchronous Events Supported
00:27:12.648 Namespace Attribute Notices: Supported
00:27:12.648 Firmware Activation Notices: Not Supported
00:27:12.648 ANA Change Notices: Not Supported
00:27:12.648 PLE Aggregate Log Change Notices: Not Supported
00:27:12.648 LBA Status Info Alert Notices: Not Supported
00:27:12.648 EGE Aggregate Log Change Notices: Not Supported
00:27:12.648 Normal NVM Subsystem Shutdown event: Not Supported
00:27:12.648 Zone Descriptor Change Notices: Not Supported
00:27:12.648 Discovery Log Change Notices: Not Supported
00:27:12.648 Controller Attributes
00:27:12.648 128-bit Host Identifier: Supported
00:27:12.648 Non-Operational Permissive Mode: Not Supported
00:27:12.648 NVM Sets: Not Supported
00:27:12.648 Read Recovery Levels: Not Supported
00:27:12.648 Endurance Groups: Not Supported
00:27:12.648 Predictable Latency Mode: Not Supported
00:27:12.648 Traffic Based Keep ALive: Not Supported
00:27:12.648 Namespace Granularity: Not Supported
00:27:12.648 SQ Associations: Not Supported
00:27:12.648 UUID List: Not Supported
00:27:12.648 Multi-Domain Subsystem: Not Supported
00:27:12.648 Fixed Capacity Management: Not Supported
00:27:12.648 Variable Capacity Management: Not Supported
00:27:12.648 Delete Endurance Group: Not Supported
00:27:12.648 Delete NVM Set: Not Supported
00:27:12.648 Extended LBA Formats Supported: Not Supported
00:27:12.648 Flexible Data Placement Supported: Not Supported
00:27:12.648
00:27:12.648 Controller Memory Buffer Support
00:27:12.648 ================================
00:27:12.648 Supported: No
00:27:12.648
00:27:12.648 Persistent Memory Region Support
00:27:12.648 ================================
00:27:12.648 Supported: No
00:27:12.648
00:27:12.648 Admin Command Set Attributes
00:27:12.648 ============================
00:27:12.648 Security Send/Receive: Not Supported
00:27:12.648 Format NVM: Not Supported
00:27:12.648 Firmware Activate/Download: Not Supported
00:27:12.648 Namespace Management: Not Supported
00:27:12.648 Device Self-Test: Not Supported
00:27:12.648 Directives: Not Supported
00:27:12.648 NVMe-MI: Not Supported
00:27:12.648 Virtualization Management: Not Supported
00:27:12.648 Doorbell Buffer Config: Not Supported
00:27:12.648 Get LBA Status Capability: Not Supported
00:27:12.648 Command & Feature Lockdown Capability: Not Supported
00:27:12.648 Abort Command Limit: 4
00:27:12.648 Async Event Request Limit: 4
00:27:12.648 Number of Firmware Slots: N/A
00:27:12.648 Firmware Slot 1 Read-Only: N/A
00:27:12.648 Firmware Activation Without Reset: N/A
00:27:12.648 Multiple Update Detection Support: N/A
00:27:12.648 Firmware Update Granularity: No Information Provided
00:27:12.648 Per-Namespace SMART Log: No
00:27:12.648 Asymmetric Namespace Access Log Page: Not Supported
00:27:12.648 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:27:12.648 Command Effects Log Page: Supported
00:27:12.648 Get Log Page Extended Data: Supported
00:27:12.648 Telemetry Log Pages: Not Supported
00:27:12.648 Persistent Event Log Pages: Not Supported
00:27:12.648 Supported Log Pages Log Page: May Support
00:27:12.648 Commands Supported & Effects Log Page: Not Supported
00:27:12.648 Feature Identifiers & Effects Log Page:May Support
00:27:12.648 NVMe-MI Commands & Effects Log Page: May Support
00:27:12.648 Data Area 4 for Telemetry Log: Not Supported
00:27:12.648 Error Log Page Entries Supported: 128
00:27:12.648 Keep Alive: Supported
00:27:12.648 Keep Alive Granularity: 10000 ms
00:27:12.648
00:27:12.648 NVM Command Set Attributes
00:27:12.648 ==========================
00:27:12.648 Submission Queue Entry Size
00:27:12.648 Max: 64
00:27:12.648 Min: 64
00:27:12.648 Completion Queue Entry Size
00:27:12.648 Max: 16
00:27:12.648 Min: 16
00:27:12.648 Number of Namespaces: 32
00:27:12.648 Compare Command: Supported
00:27:12.648 Write Uncorrectable Command: Not Supported
00:27:12.648 Dataset Management Command: Supported
00:27:12.648 Write Zeroes Command: Supported
00:27:12.648 Set Features Save Field: Not Supported
00:27:12.648 Reservations: Supported
00:27:12.648 Timestamp: Not Supported
00:27:12.648 Copy: Supported
00:27:12.648 Volatile Write Cache: Present
00:27:12.648 Atomic Write Unit (Normal): 1
00:27:12.648 Atomic Write Unit (PFail): 1
00:27:12.648 Atomic Compare & Write Unit: 1
00:27:12.648 Fused Compare & Write: Supported
00:27:12.648 Scatter-Gather List
00:27:12.648 SGL Command Set: Supported
00:27:12.648 SGL Keyed: Supported
00:27:12.648 SGL Bit Bucket Descriptor: Not Supported
00:27:12.648 SGL Metadata Pointer: Not Supported
00:27:12.648 Oversized SGL: Not Supported
00:27:12.648 SGL Metadata Address: Not Supported
00:27:12.648 SGL Offset: Supported
00:27:12.648 Transport SGL Data Block: Not Supported
00:27:12.648 Replay Protected Memory Block: Not Supported
00:27:12.648
00:27:12.648 Firmware Slot Information
00:27:12.648 =========================
00:27:12.648 Active slot: 1
00:27:12.648 Slot 1 Firmware Revision: 24.09
00:27:12.648
00:27:12.648
00:27:12.648 Commands Supported and Effects
00:27:12.648 ==============================
00:27:12.648 Admin Commands
00:27:12.648 --------------
00:27:12.648 Get Log Page (02h): Supported
00:27:12.648 Identify (06h): Supported
00:27:12.648 Abort (08h): Supported
00:27:12.648 Set Features (09h): Supported
00:27:12.648 Get Features (0Ah): Supported
00:27:12.648 Asynchronous Event Request (0Ch): Supported
00:27:12.648 Keep Alive (18h): Supported
00:27:12.648 I/O Commands
00:27:12.648 ------------
00:27:12.648 Flush (00h): Supported LBA-Change
00:27:12.648 Write (01h): Supported LBA-Change
00:27:12.648 Read (02h): Supported
00:27:12.648 Compare (05h): Supported
00:27:12.648 Write Zeroes (08h): Supported LBA-Change
00:27:12.648 Dataset Management (09h): Supported LBA-Change
00:27:12.648 Copy (19h): Supported LBA-Change
00:27:12.648
00:27:12.648 Error Log
00:27:12.648 =========
00:27:12.648
00:27:12.648 Arbitration
00:27:12.648 ===========
00:27:12.648 Arbitration Burst: 1
00:27:12.648
00:27:12.648 Power Management
00:27:12.648 ================
00:27:12.648 Number of Power States: 1
00:27:12.648 Current Power State: Power State #0
00:27:12.648 Power State #0:
00:27:12.648 Max Power: 0.00 W
00:27:12.648 Non-Operational State: Operational
00:27:12.648 Entry Latency: Not Reported
00:27:12.648 Exit Latency: Not Reported
00:27:12.648 Relative Read Throughput: 0
00:27:12.648 Relative Read Latency: 0
00:27:12.648 Relative Write Throughput: 0
00:27:12.648 Relative Write Latency: 0
00:27:12.648 Idle Power: Not Reported
00:27:12.648 Active Power: Not Reported
00:27:12.648 Non-Operational Permissive Mode: Not Supported
00:27:12.648
00:27:12.648 Health Information
00:27:12.648 ==================
00:27:12.648 Critical Warnings:
00:27:12.648 Available Spare Space: OK
00:27:12.648 Temperature: OK
00:27:12.648 Device Reliability: OK
00:27:12.648 Read Only: No
00:27:12.648 Volatile Memory Backup: OK
00:27:12.648 Current Temperature: 0 Kelvin (-273 Celsius)
00:27:12.648 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:27:12.648 Available Spare: 0%
00:27:12.648 Available Spare Threshold: 0%
00:27:12.648 Life Percentage Used:[2024-07-26 09:00:30.992094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.648 [2024-07-26 09:00:30.992107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x154d630)
00:27:12.648 [2024-07-26 09:00:30.992118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.648 [2024-07-26 09:00:30.992141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159ca00, cid 7, qid 0
00:27:12.648 [2024-07-26 09:00:30.992301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.649 [2024-07-26 09:00:30.992314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.649 [2024-07-26 09:00:30.992321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.649 [2024-07-26 09:00:30.992328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159ca00) on tqpair=0x154d630
00:27:12.649 [2024-07-26 09:00:30.992376] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:27:12.649 [2024-07-26 09:00:30.992395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159bf80) on tqpair=0x154d630
00:27:12.649 [2024-07-26 09:00:30.992406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.649 [2024-07-26 09:00:30.992416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c100) on tqpair=0x154d630
00:27:12.649 [2024-07-26 09:00:30.992423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.649 [2024-07-26 09:00:30.992432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c280) on tqpair=0x154d630 00:27:12.649 [2024-07-26 09:00:30.992440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.649 [2024-07-26 09:00:30.992448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c400) on tqpair=0x154d630 00:27:12.649 [2024-07-26 09:00:30.992471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.649 [2024-07-26 09:00:30.992484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.992492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.992498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x154d630) 00:27:12.649 [2024-07-26 09:00:30.992509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.649 [2024-07-26 09:00:30.992531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c400, cid 3, qid 0 00:27:12.649 [2024-07-26 09:00:30.992682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.649 [2024-07-26 09:00:30.992694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.649 [2024-07-26 09:00:30.992701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.992708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c400) on tqpair=0x154d630 00:27:12.649 [2024-07-26 09:00:30.992720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.992727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:27:12.649 [2024-07-26 09:00:30.992734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x154d630) 00:27:12.649 [2024-07-26 09:00:30.992744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.649 [2024-07-26 09:00:30.992770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c400, cid 3, qid 0 00:27:12.649 [2024-07-26 09:00:30.992897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.649 [2024-07-26 09:00:30.992912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.649 [2024-07-26 09:00:30.992919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.992930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c400) on tqpair=0x154d630 00:27:12.649 [2024-07-26 09:00:30.992939] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:12.649 [2024-07-26 09:00:30.992947] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:12.649 [2024-07-26 09:00:30.992963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.992972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.992979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x154d630) 00:27:12.649 [2024-07-26 09:00:30.992989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.649 [2024-07-26 09:00:30.993010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c400, cid 3, qid 0 00:27:12.649 [2024-07-26 09:00:30.997084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.649 [2024-07-26 
09:00:30.997101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.649 [2024-07-26 09:00:30.997108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.997115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c400) on tqpair=0x154d630 00:27:12.649 [2024-07-26 09:00:30.997133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.997159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.997166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x154d630) 00:27:12.649 [2024-07-26 09:00:30.997177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.649 [2024-07-26 09:00:30.997200] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x159c400, cid 3, qid 0 00:27:12.649 [2024-07-26 09:00:30.997351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.649 [2024-07-26 09:00:30.997362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.649 [2024-07-26 09:00:30.997369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.649 [2024-07-26 09:00:30.997376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x159c400) on tqpair=0x154d630 00:27:12.649 [2024-07-26 09:00:30.997389] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:27:12.649 0% 00:27:12.649 Data Units Read: 0 00:27:12.649 Data Units Written: 0 00:27:12.649 Host Read Commands: 0 00:27:12.649 Host Write Commands: 0 00:27:12.649 Controller Busy Time: 0 minutes 00:27:12.649 Power Cycles: 0 00:27:12.649 Power On Hours: 0 hours 00:27:12.649 Unsafe Shutdowns: 0 00:27:12.649 Unrecoverable Media Errors: 0 00:27:12.649 Lifetime Error Log Entries: 0 
00:27:12.649 Warning Temperature Time: 0 minutes 00:27:12.649 Critical Temperature Time: 0 minutes 00:27:12.649 00:27:12.649 Number of Queues 00:27:12.649 ================ 00:27:12.649 Number of I/O Submission Queues: 127 00:27:12.649 Number of I/O Completion Queues: 127 00:27:12.649 00:27:12.649 Active Namespaces 00:27:12.649 ================= 00:27:12.649 Namespace ID:1 00:27:12.649 Error Recovery Timeout: Unlimited 00:27:12.649 Command Set Identifier: NVM (00h) 00:27:12.649 Deallocate: Supported 00:27:12.649 Deallocated/Unwritten Error: Not Supported 00:27:12.649 Deallocated Read Value: Unknown 00:27:12.649 Deallocate in Write Zeroes: Not Supported 00:27:12.649 Deallocated Guard Field: 0xFFFF 00:27:12.649 Flush: Supported 00:27:12.649 Reservation: Supported 00:27:12.649 Namespace Sharing Capabilities: Multiple Controllers 00:27:12.649 Size (in LBAs): 131072 (0GiB) 00:27:12.649 Capacity (in LBAs): 131072 (0GiB) 00:27:12.649 Utilization (in LBAs): 131072 (0GiB) 00:27:12.649 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:12.649 EUI64: ABCDEF0123456789 00:27:12.649 UUID: a5a06380-f28a-44c6-961a-e52f80f2dcd6 00:27:12.649 Thin Provisioning: Not Supported 00:27:12.649 Per-NS Atomic Units: Yes 00:27:12.649 Atomic Boundary Size (Normal): 0 00:27:12.649 Atomic Boundary Size (PFail): 0 00:27:12.649 Atomic Boundary Offset: 0 00:27:12.649 Maximum Single Source Range Length: 65535 00:27:12.649 Maximum Copy Length: 65535 00:27:12.649 Maximum Source Range Count: 1 00:27:12.649 NGUID/EUI64 Never Reused: No 00:27:12.649 Namespace Write Protected: No 00:27:12.649 Number of LBA Formats: 1 00:27:12.649 Current LBA Format: LBA Format #00 00:27:12.649 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:12.649 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:12.649 09:00:31 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.649 rmmod nvme_tcp 00:27:12.649 rmmod nvme_fabrics 00:27:12.649 rmmod nvme_keyring 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1057142 ']' 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1057142 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1057142 ']' 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1057142 00:27:12.649 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:27:12.649 09:00:31 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:12.650 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1057142 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1057142' 00:27:12.936 killing process with pid 1057142 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1057142 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1057142 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.936 09:00:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.470 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:15.470 00:27:15.470 real 0m5.227s 00:27:15.470 user 0m4.292s 00:27:15.471 sys 0m1.776s 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:15.471 ************************************ 00:27:15.471 END TEST nvmf_identify 00:27:15.471 ************************************ 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.471 ************************************ 00:27:15.471 START TEST nvmf_perf 00:27:15.471 ************************************ 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:15.471 * Looking for test storage... 
00:27:15.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.471 09:00:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.471 09:00:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:15.471 09:00:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:17.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.373 09:00:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:17.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:27:17.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:17.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.373 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.374 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:27:17.374 00:27:17.374 --- 10.0.0.2 ping statistics --- 00:27:17.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.374 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:27:17.374 00:27:17.374 --- 10.0.0.1 ping statistics --- 00:27:17.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.374 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 
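The nvmf/common.sh trace above builds a two-interface loopback topology by moving the target NIC into a network namespace, addressing both sides, opening TCP port 4420, and ping-checking connectivity. A minimal dry-run sketch of that sequence follows; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are copied from this log, and the `run` wrapper only prints each command (swap its body for `"$@"` and run as root to actually apply it):

```shell
#!/bin/sh
# Dry-run sketch of the netns topology set up by nvmf/common.sh in this log.
# Interface names and addresses are taken from the trace above.
run() { echo "+ $*"; }   # replace the echo with "$@" (as root) to apply

setup_netns_topology() {
    run ip netns add cvl_0_0_ns_spdk                 # namespace for the target
    run ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move target NIC inside it
    run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side address
    run ip netns exec cvl_0_0_ns_spdk \
        ip addr add 10.0.0.2/24 dev cvl_0_0          # target side address
    run ip link set cvl_0_1 up
    run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                           # connectivity check
}

setup_netns_topology
```

With this layout the nvmf_tgt process is later launched under `ip netns exec cvl_0_0_ns_spdk`, so target and initiator traffic traverse the two physical ports instead of the host loopback.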
00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1059215 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1059215 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1059215 ']' 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:17.374 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:17.374 [2024-07-26 09:00:35.709142] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:17.374 [2024-07-26 09:00:35.709231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.374 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.374 [2024-07-26 09:00:35.749383] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:17.374 [2024-07-26 09:00:35.776133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.632 [2024-07-26 09:00:35.866377] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.632 [2024-07-26 09:00:35.866435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.632 [2024-07-26 09:00:35.866455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.632 [2024-07-26 09:00:35.866472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.632 [2024-07-26 09:00:35.866489] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.632 [2024-07-26 09:00:35.866622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.632 [2024-07-26 09:00:35.866688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.632 [2024-07-26 09:00:35.866761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.632 [2024-07-26 09:00:35.866755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.632 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:17.632 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:27:17.632 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.632 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:17.633 09:00:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:17.633 09:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.633 09:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:17.633 09:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:20.911 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:20.911 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:20.911 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:20.911 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:21.476 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:21.477 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:21.477 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:21.477 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:21.477 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:21.477 [2024-07-26 09:00:39.917981] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.734 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:21.734 09:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:21.734 09:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:21.992 09:00:40 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:21.992 09:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:22.250 09:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.507 [2024-07-26 09:00:40.921697] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.507 09:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:22.765 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:22.765 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:22.765 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:22.765 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:24.137 Initializing NVMe Controllers 00:27:24.137 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:27:24.137 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:27:24.137 Initialization complete. Launching workers. 
00:27:24.137 ======================================================== 00:27:24.137 Latency(us) 00:27:24.137 Device Information : IOPS MiB/s Average min max 00:27:24.137 PCIE (0000:88:00.0) NSID 1 from core 0: 86215.76 336.78 370.61 11.95 4309.57 00:27:24.137 ======================================================== 00:27:24.137 Total : 86215.76 336.78 370.61 11.95 4309.57 00:27:24.137 00:27:24.137 09:00:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:24.137 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.510 Initializing NVMe Controllers 00:27:25.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:25.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:25.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:25.510 Initialization complete. Launching workers. 
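The host/perf.sh steps traced above bring the target up over JSON-RPC: create the TCP transport, create a subsystem, attach the Malloc0 and Nvme0n1 namespaces, then add a data listener and a discovery listener on 10.0.0.2:4420. A dry-run sketch of that RPC sequence (the rpc.py path and NQN are copied from the log; the `rpc` wrapper only prints what would be sent):

```shell
#!/bin/sh
# Dry-run sketch of the host/perf.sh RPC sequence traced above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "+ $RPC $*"; }   # drop the echo to issue real RPCs

target_bringup() {
    rpc nvmf_create_transport -t tcp -o                  # TCP transport init
    rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns "$NQN" Malloc0             # RAM-disk namespace
    rpc nvmf_subsystem_add_ns "$NQN" Nvme0n1             # local NVMe namespace
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}

target_bringup
```

Once both listeners are up, spdk_nvme_perf can connect with `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'` as the subsequent runs in this log do.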
00:27:25.510 ======================================================== 00:27:25.510 Latency(us) 00:27:25.510 Device Information : IOPS MiB/s Average min max 00:27:25.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 89.00 0.35 11590.79 202.62 45028.26 00:27:25.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.00 0.23 16741.38 4999.65 55849.70 00:27:25.510 ======================================================== 00:27:25.510 Total : 149.00 0.58 13664.85 202.62 55849.70 00:27:25.510 00:27:25.510 09:00:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:25.510 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.443 Initializing NVMe Controllers 00:27:26.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:26.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:26.443 Initialization complete. Launching workers. 
00:27:26.443 ======================================================== 00:27:26.443 Latency(us) 00:27:26.443 Device Information : IOPS MiB/s Average min max 00:27:26.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8457.99 33.04 3798.32 600.10 11178.79 00:27:26.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3820.00 14.92 8414.34 6838.40 18957.20 00:27:26.443 ======================================================== 00:27:26.443 Total : 12277.99 47.96 5234.48 600.10 18957.20 00:27:26.443 00:27:26.701 09:00:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:26.701 09:00:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:26.701 09:00:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.701 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.235 Initializing NVMe Controllers 00:27:29.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:29.235 Controller IO queue size 128, less than required. 00:27:29.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:29.235 Controller IO queue size 128, less than required. 00:27:29.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:29.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:29.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:29.235 Initialization complete. Launching workers. 
00:27:29.235 ======================================================== 00:27:29.235 Latency(us) 00:27:29.235 Device Information : IOPS MiB/s Average min max 00:27:29.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1266.49 316.62 103175.65 75329.73 144623.63 00:27:29.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 565.49 141.37 234251.76 87120.05 395571.31 00:27:29.235 ======================================================== 00:27:29.235 Total : 1831.98 458.00 143636.10 75329.73 395571.31 00:27:29.235 00:27:29.235 09:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:29.235 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.493 No valid NVMe controllers or AIO or URING devices found 00:27:29.493 Initializing NVMe Controllers 00:27:29.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:29.493 Controller IO queue size 128, less than required. 00:27:29.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:29.493 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:29.493 Controller IO queue size 128, less than required. 00:27:29.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:29.493 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:29.493 WARNING: Some requested NVMe devices were skipped 00:27:29.493 09:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:29.493 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.024 Initializing NVMe Controllers 00:27:32.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:32.024 Controller IO queue size 128, less than required. 00:27:32.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:32.024 Controller IO queue size 128, less than required. 00:27:32.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:32.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:32.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:32.024 Initialization complete. Launching workers. 
00:27:32.024 00:27:32.024 ==================== 00:27:32.024 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:32.024 TCP transport: 00:27:32.024 polls: 20963 00:27:32.024 idle_polls: 9670 00:27:32.024 sock_completions: 11293 00:27:32.024 nvme_completions: 3977 00:27:32.024 submitted_requests: 5950 00:27:32.024 queued_requests: 1 00:27:32.024 00:27:32.024 ==================== 00:27:32.024 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:32.024 TCP transport: 00:27:32.024 polls: 18696 00:27:32.024 idle_polls: 6589 00:27:32.024 sock_completions: 12107 00:27:32.024 nvme_completions: 5537 00:27:32.024 submitted_requests: 8366 00:27:32.024 queued_requests: 1 00:27:32.024 ======================================================== 00:27:32.024 Latency(us) 00:27:32.024 Device Information : IOPS MiB/s Average min max 00:27:32.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 994.00 248.50 132112.34 74274.19 197474.08 00:27:32.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1384.00 346.00 94628.04 48134.93 135202.98 00:27:32.024 ======================================================== 00:27:32.024 Total : 2377.99 594.50 110296.41 48134.93 197474.08 00:27:32.024 00:27:32.024 09:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:32.024 09:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:32.282 09:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:32.282 09:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:27:32.282 09:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:35.634 09:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=7ea4b2eb-e206-41a2-b93c-8faf56b46212 00:27:35.634 09:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 7ea4b2eb-e206-41a2-b93c-8faf56b46212 00:27:35.634 09:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=7ea4b2eb-e206-41a2-b93c-8faf56b46212 00:27:35.634 09:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:35.634 09:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:35.634 09:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:35.634 09:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:35.891 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:35.891 { 00:27:35.891 "uuid": "7ea4b2eb-e206-41a2-b93c-8faf56b46212", 00:27:35.891 "name": "lvs_0", 00:27:35.891 "base_bdev": "Nvme0n1", 00:27:35.891 "total_data_clusters": 238234, 00:27:35.891 "free_clusters": 238234, 00:27:35.891 "block_size": 512, 00:27:35.891 "cluster_size": 4194304 00:27:35.891 } 00:27:35.891 ]' 00:27:35.891 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="7ea4b2eb-e206-41a2-b93c-8faf56b46212") .free_clusters' 00:27:35.891 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:27:35.891 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="7ea4b2eb-e206-41a2-b93c-8faf56b46212") .cluster_size' 00:27:35.891 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:35.891 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:27:35.891 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
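The get_lvs_free_mb helper traced above derives usable capacity from `bdev_lvol_get_lvstores` output as free_clusters times cluster_size bytes, converted to MiB: 238234 clusters of 4 MiB give the 952936 echoed below. The same arithmetic as a small standalone helper:

```shell
#!/bin/sh
# Reproduce the get_lvs_free_mb arithmetic from the trace above:
#   free MiB = free_clusters * cluster_size_bytes / 1 MiB
get_free_mb() {
    fc=$1   # free_clusters from bdev_lvol_get_lvstores
    cs=$2   # cluster_size in bytes
    echo $(( fc * cs / 1048576 ))
}

get_free_mb 238234 4194304   # lvs_0 in this log -> 952936
```

The nested store later in the log works the same way: 5114 free clusters at 4194304 bytes each yield 20456 MiB, which is below the 20480 MiB cap so no clamping occurs there.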
00:27:35.891 952936 00:27:35.891 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:27:35.891 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:35.891 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7ea4b2eb-e206-41a2-b93c-8faf56b46212 lbd_0 20480 00:27:36.453 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=3e81a222-0db7-41d5-a044-9a888ac926e2 00:27:36.453 09:00:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3e81a222-0db7-41d5-a044-9a888ac926e2 lvs_n_0 00:27:37.017 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2a1bbbd9-f118-4bbb-a723-55d46a3fc846 00:27:37.017 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2a1bbbd9-f118-4bbb-a723-55d46a3fc846 00:27:37.017 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=2a1bbbd9-f118-4bbb-a723-55d46a3fc846 00:27:37.017 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:37.017 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:37.017 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:37.017 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:37.274 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:37.274 { 00:27:37.274 "uuid": "7ea4b2eb-e206-41a2-b93c-8faf56b46212", 00:27:37.274 "name": "lvs_0", 00:27:37.274 "base_bdev": "Nvme0n1", 00:27:37.274 "total_data_clusters": 238234, 00:27:37.274 "free_clusters": 233114, 00:27:37.274 "block_size": 512, 00:27:37.274 
"cluster_size": 4194304 00:27:37.274 }, 00:27:37.274 { 00:27:37.274 "uuid": "2a1bbbd9-f118-4bbb-a723-55d46a3fc846", 00:27:37.274 "name": "lvs_n_0", 00:27:37.274 "base_bdev": "3e81a222-0db7-41d5-a044-9a888ac926e2", 00:27:37.274 "total_data_clusters": 5114, 00:27:37.274 "free_clusters": 5114, 00:27:37.274 "block_size": 512, 00:27:37.274 "cluster_size": 4194304 00:27:37.274 } 00:27:37.274 ]' 00:27:37.274 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="2a1bbbd9-f118-4bbb-a723-55d46a3fc846") .free_clusters' 00:27:37.274 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:27:37.274 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="2a1bbbd9-f118-4bbb-a723-55d46a3fc846") .cluster_size' 00:27:37.531 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:37.531 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:27:37.531 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:27:37.531 20456 00:27:37.531 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:37.531 09:00:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2a1bbbd9-f118-4bbb-a723-55d46a3fc846 lbd_nest_0 20456 00:27:37.788 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=04bc10de-9c44-457e-802f-0c1b14a89309 00:27:37.788 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:38.045 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:38.045 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 04bc10de-9c44-457e-802f-0c1b14a89309 00:27:38.302 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.559 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:38.559 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:38.559 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:38.559 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:38.559 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:38.559 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.747 Initializing NVMe Controllers 00:27:50.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:50.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:50.747 Initialization complete. Launching workers. 
00:27:50.747 ======================================================== 00:27:50.747 Latency(us) 00:27:50.747 Device Information : IOPS MiB/s Average min max 00:27:50.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.00 0.02 20433.13 205.99 44861.63 00:27:50.747 ======================================================== 00:27:50.747 Total : 49.00 0.02 20433.13 205.99 44861.63 00:27:50.747 00:27:50.747 09:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:50.747 09:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.747 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.710 Initializing NVMe Controllers 00:28:00.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:00.710 Initialization complete. Launching workers. 
00:28:00.710 ======================================================== 00:28:00.710 Latency(us) 00:28:00.710 Device Information : IOPS MiB/s Average min max 00:28:00.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.30 10.04 12462.07 3022.79 47885.68 00:28:00.710 ======================================================== 00:28:00.710 Total : 80.30 10.04 12462.07 3022.79 47885.68 00:28:00.710 00:28:00.710 09:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:00.710 09:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:00.710 09:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:00.710 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.667 Initializing NVMe Controllers 00:28:10.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:10.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:10.667 Initialization complete. Launching workers. 
00:28:10.667 ======================================================== 00:28:10.667 Latency(us) 00:28:10.667 Device Information : IOPS MiB/s Average min max 00:28:10.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7419.88 3.62 4313.98 306.10 10534.50 00:28:10.667 ======================================================== 00:28:10.667 Total : 7419.88 3.62 4313.98 306.10 10534.50 00:28:10.667 00:28:10.667 09:01:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:10.667 09:01:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:10.667 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.690 Initializing NVMe Controllers 00:28:20.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:20.690 Initialization complete. Launching workers. 
00:28:20.690 ======================================================== 00:28:20.690 Latency(us) 00:28:20.690 Device Information : IOPS MiB/s Average min max 00:28:20.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2479.23 309.90 12914.70 770.42 29479.82 00:28:20.690 ======================================================== 00:28:20.690 Total : 2479.23 309.90 12914.70 770.42 29479.82 00:28:20.690 00:28:20.690 09:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:20.690 09:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:20.690 09:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:20.690 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.669 Initializing NVMe Controllers 00:28:30.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.669 Controller IO queue size 128, less than required. 00:28:30.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:30.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:30.669 Initialization complete. Launching workers. 
00:28:30.669 ======================================================== 00:28:30.669 Latency(us) 00:28:30.669 Device Information : IOPS MiB/s Average min max 00:28:30.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11899.10 5.81 10765.11 1712.96 25774.93 00:28:30.669 ======================================================== 00:28:30.669 Total : 11899.10 5.81 10765.11 1712.96 25774.93 00:28:30.669 00:28:30.669 09:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:30.669 09:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.669 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.641 Initializing NVMe Controllers 00:28:40.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.641 Controller IO queue size 128, less than required. 00:28:40.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:40.641 Initialization complete. Launching workers. 
00:28:40.641 ======================================================== 00:28:40.641 Latency(us) 00:28:40.641 Device Information : IOPS MiB/s Average min max 00:28:40.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1213.54 151.69 105920.72 24737.81 215560.64 00:28:40.641 ======================================================== 00:28:40.641 Total : 1213.54 151.69 105920.72 24737.81 215560.64 00:28:40.641 00:28:40.641 09:01:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:40.899 09:01:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 04bc10de-9c44-457e-802f-0c1b14a89309 00:28:41.834 09:01:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:41.834 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3e81a222-0db7-41d5-a044-9a888ac926e2 00:28:42.092 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:42.350 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:42.350 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:42.350 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:42.350 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:28:42.350 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:42.350 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:28:42.350 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:28:42.350 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:42.350 rmmod nvme_tcp 00:28:42.610 rmmod nvme_fabrics 00:28:42.610 rmmod nvme_keyring 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1059215 ']' 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1059215 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1059215 ']' 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1059215 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1059215 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1059215' 00:28:42.610 killing process with pid 1059215 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1059215 00:28:42.610 09:02:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1059215 00:28:44.511 09:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:44.511 09:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:28:44.511 09:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:44.511 09:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.511 09:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.511 09:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.511 09:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.511 09:02:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:46.419 00:28:46.419 real 1m31.030s 00:28:46.419 user 5m37.010s 00:28:46.419 sys 0m15.770s 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:46.419 ************************************ 00:28:46.419 END TEST nvmf_perf 00:28:46.419 ************************************ 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.419 ************************************ 00:28:46.419 START TEST nvmf_fio_host 00:28:46.419 ************************************ 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:46.419 * 
Looking for test storage... 00:28:46.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.419 09:02:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.419 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:46.420 09:02:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.355 09:02:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.355 09:02:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:48.355 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:48.355 09:02:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:48.355 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:48.356 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:48.356 09:02:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:48.356 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:48.356 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:48.356 09:02:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:48.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:28:48.356 00:28:48.356 --- 10.0.0.2 ping statistics --- 00:28:48.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.356 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:28:48.356 00:28:48.356 --- 10.0.0.1 ping statistics --- 00:28:48.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.356 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.356 09:02:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1071173 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1071173 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1071173 ']' 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:48.356 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.356 [2024-07-26 09:02:06.653546] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:28:48.356 [2024-07-26 09:02:06.653617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.356 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.356 [2024-07-26 09:02:06.691768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:48.356 [2024-07-26 09:02:06.723158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.614 [2024-07-26 09:02:06.821713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.614 [2024-07-26 09:02:06.821784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.614 [2024-07-26 09:02:06.821810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.614 [2024-07-26 09:02:06.821832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.614 [2024-07-26 09:02:06.821850] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:48.614 [2024-07-26 09:02:06.825085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.614 [2024-07-26 09:02:06.825134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.614 [2024-07-26 09:02:06.825226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.614 [2024-07-26 09:02:06.825230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.614 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:48.614 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:28:48.614 09:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:48.871 [2024-07-26 09:02:07.201028] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.872 09:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:48.872 09:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.872 09:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.872 09:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:49.129 Malloc1 00:28:49.129 09:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.386 09:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:49.643 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.901 [2024-07-26 09:02:08.297230] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.901 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:50.159 09:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:50.418 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:50.418 fio-3.35 
00:28:50.418 Starting 1 thread 00:28:50.418 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.947 00:28:52.947 test: (groupid=0, jobs=1): err= 0: pid=1071530: Fri Jul 26 09:02:11 2024 00:28:52.947 read: IOPS=9093, BW=35.5MiB/s (37.2MB/s)(71.3MiB/2007msec) 00:28:52.947 slat (nsec): min=1985, max=164545, avg=2490.99, stdev=1842.59 00:28:52.947 clat (usec): min=2502, max=13432, avg=7743.04, stdev=593.85 00:28:52.947 lat (usec): min=2530, max=13434, avg=7745.53, stdev=593.74 00:28:52.947 clat percentiles (usec): 00:28:52.947 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7308], 00:28:52.947 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:28:52.947 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8586], 00:28:52.947 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[10552], 99.95th=[11469], 00:28:52.947 | 99.99th=[13435] 00:28:52.947 bw ( KiB/s): min=35264, max=36928, per=100.00%, avg=36374.00, stdev=752.95, samples=4 00:28:52.947 iops : min= 8816, max= 9232, avg=9093.50, stdev=188.24, samples=4 00:28:52.947 write: IOPS=9108, BW=35.6MiB/s (37.3MB/s)(71.4MiB/2007msec); 0 zone resets 00:28:52.947 slat (usec): min=2, max=122, avg= 2.59, stdev= 1.36 00:28:52.947 clat (usec): min=1378, max=12229, avg=6228.29, stdev=510.26 00:28:52.947 lat (usec): min=1386, max=12231, avg=6230.89, stdev=510.21 00:28:52.947 clat percentiles (usec): 00:28:52.947 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:28:52.947 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:28:52.947 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6980], 00:28:52.947 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[10290], 99.95th=[11207], 00:28:52.947 | 99.99th=[12125] 00:28:52.947 bw ( KiB/s): min=35984, max=36800, per=100.00%, avg=36436.00, stdev=371.66, samples=4 00:28:52.947 iops : min= 8996, max= 9200, avg=9109.00, stdev=92.92, samples=4 00:28:52.947 lat (msec) : 2=0.02%, 4=0.12%, 10=99.73%, 20=0.13% 
00:28:52.947 cpu : usr=62.06%, sys=33.30%, ctx=90, majf=0, minf=40 00:28:52.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:52.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:52.947 issued rwts: total=18251,18281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:52.947 00:28:52.947 Run status group 0 (all jobs): 00:28:52.947 READ: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.3MiB (74.8MB), run=2007-2007msec 00:28:52.947 WRITE: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.4MiB (74.9MB), run=2007-2007msec 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:52.947 09:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:28:53.205 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:53.205 fio-3.35 00:28:53.205 Starting 1 thread 00:28:53.205 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.732 00:28:55.732 test: (groupid=0, jobs=1): err= 0: pid=1071988: Fri Jul 26 09:02:13 2024 00:28:55.732 read: IOPS=8524, BW=133MiB/s (140MB/s)(267MiB/2006msec) 00:28:55.732 slat (nsec): min=2887, max=93692, avg=3760.50, stdev=1479.05 00:28:55.732 clat (usec): min=2303, max=16671, avg=8778.83, stdev=1941.20 00:28:55.732 lat (usec): min=2306, max=16674, avg=8782.59, stdev=1941.23 00:28:55.732 clat percentiles (usec): 00:28:55.732 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7046], 00:28:55.732 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9372], 00:28:55.732 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11338], 95.00th=[11994], 00:28:55.732 | 99.00th=[13435], 99.50th=[13960], 99.90th=[15008], 99.95th=[15533], 00:28:55.732 | 99.99th=[15664] 00:28:55.732 bw ( KiB/s): min=60800, max=78944, per=51.97%, avg=70888.00, stdev=7725.61, samples=4 00:28:55.732 iops : min= 3800, max= 4934, avg=4430.50, stdev=482.85, samples=4 00:28:55.732 write: IOPS=4997, BW=78.1MiB/s (81.9MB/s)(145MiB/1852msec); 0 zone resets 00:28:55.732 slat (usec): min=30, max=138, avg=33.47, stdev= 4.60 00:28:55.732 clat (usec): min=4071, max=17871, avg=10930.04, stdev=1941.44 00:28:55.732 lat (usec): min=4104, max=17902, avg=10963.52, stdev=1941.47 00:28:55.732 clat percentiles (usec): 00:28:55.732 | 1.00th=[ 7439], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9372], 00:28:55.732 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:28:55.732 | 70.00th=[11731], 80.00th=[12518], 90.00th=[13829], 95.00th=[14746], 00:28:55.732 | 99.00th=[15795], 99.50th=[16188], 99.90th=[16712], 99.95th=[16909], 00:28:55.732 | 99.99th=[17957] 00:28:55.732 bw ( KiB/s): min=64160, max=81952, per=92.10%, 
avg=73640.00, stdev=7826.27, samples=4 00:28:55.732 iops : min= 4010, max= 5122, avg=4602.50, stdev=489.14, samples=4 00:28:55.732 lat (msec) : 4=0.14%, 10=61.28%, 20=38.58% 00:28:55.732 cpu : usr=76.56%, sys=20.40%, ctx=40, majf=0, minf=66 00:28:55.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:55.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:55.732 issued rwts: total=17101,9255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:55.732 00:28:55.732 Run status group 0 (all jobs): 00:28:55.732 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=267MiB (280MB), run=2006-2006msec 00:28:55.732 WRITE: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=145MiB (152MB), run=1852-1852msec 00:28:55.732 09:02:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.732 09:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:55.732 09:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:55.732 09:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:55.732 09:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:55.732 09:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:28:55.732 09:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:55.732 09:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:55.732 09:02:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:55.732 09:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:55.732 09:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:28:55.732 09:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:28:59.019 Nvme0n1 00:28:59.019 09:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:02.298 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=e9a87ba0-9a47-4835-a37d-5be7eba6ad00 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb e9a87ba0-9a47-4835-a37d-5be7eba6ad00 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=e9a87ba0-9a47-4835-a37d-5be7eba6ad00 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:02.299 { 00:29:02.299 "uuid": "e9a87ba0-9a47-4835-a37d-5be7eba6ad00", 00:29:02.299 "name": "lvs_0", 00:29:02.299 "base_bdev": "Nvme0n1", 00:29:02.299 "total_data_clusters": 930, 00:29:02.299 "free_clusters": 930, 00:29:02.299 
"block_size": 512, 00:29:02.299 "cluster_size": 1073741824 00:29:02.299 } 00:29:02.299 ]' 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e9a87ba0-9a47-4835-a37d-5be7eba6ad00") .free_clusters' 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e9a87ba0-9a47-4835-a37d-5be7eba6ad00") .cluster_size' 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:29:02.299 952320 00:29:02.299 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:02.556 01330e72-2153-4242-9855-09bbcaf7efc0 00:29:02.556 09:02:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:02.814 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:03.072 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
--bs=4096 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:03.331 09:02:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:03.331 09:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:03.590 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:03.590 fio-3.35 00:29:03.590 Starting 1 thread 00:29:03.590 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.119 00:29:06.119 test: (groupid=0, jobs=1): err= 0: pid=1073264: Fri Jul 26 09:02:24 2024 00:29:06.119 read: IOPS=5990, BW=23.4MiB/s (24.5MB/s)(47.0MiB/2007msec) 00:29:06.119 slat (usec): min=2, max=117, avg= 2.65, stdev= 1.89 00:29:06.119 clat (usec): min=874, max=171017, avg=11770.86, stdev=11639.06 00:29:06.119 lat (usec): min=877, max=171051, avg=11773.51, stdev=11639.28 00:29:06.119 clat percentiles (msec): 00:29:06.119 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:29:06.119 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:29:06.119 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:29:06.119 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 
00:29:06.119 | 99.99th=[ 171] 00:29:06.119 bw ( KiB/s): min=16880, max=26384, per=99.63%, avg=23872.00, stdev=4664.93, samples=4 00:29:06.119 iops : min= 4220, max= 6596, avg=5968.00, stdev=1166.23, samples=4 00:29:06.119 write: IOPS=5970, BW=23.3MiB/s (24.5MB/s)(46.8MiB/2007msec); 0 zone resets 00:29:06.119 slat (usec): min=2, max=103, avg= 2.75, stdev= 1.59 00:29:06.119 clat (usec): min=381, max=169215, avg=9500.40, stdev=10928.55 00:29:06.119 lat (usec): min=383, max=169221, avg=9503.14, stdev=10928.77 00:29:06.119 clat percentiles (msec): 00:29:06.119 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:29:06.119 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:06.119 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:29:06.119 | 99.00th=[ 11], 99.50th=[ 14], 99.90th=[ 169], 99.95th=[ 169], 00:29:06.119 | 99.99th=[ 169] 00:29:06.119 bw ( KiB/s): min=17896, max=25920, per=100.00%, avg=23882.00, stdev=3991.12, samples=4 00:29:06.119 iops : min= 4474, max= 6480, avg=5970.50, stdev=997.78, samples=4 00:29:06.119 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:06.119 lat (msec) : 2=0.03%, 4=0.11%, 10=55.22%, 20=44.08%, 250=0.53% 00:29:06.119 cpu : usr=55.13%, sys=41.28%, ctx=96, majf=0, minf=40 00:29:06.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:06.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:06.119 issued rwts: total=12022,11982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:06.119 00:29:06.119 Run status group 0 (all jobs): 00:29:06.119 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=47.0MiB (49.2MB), run=2007-2007msec 00:29:06.119 WRITE: bw=23.3MiB/s (24.5MB/s), 23.3MiB/s-23.3MiB/s (24.5MB/s-24.5MB/s), io=46.8MiB (49.1MB), run=2007-2007msec 00:29:06.119 09:02:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:06.119 09:02:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=c786fd0a-7bb8-47e3-a802-70e915930a5f 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb c786fd0a-7bb8-47e3-a802-70e915930a5f 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=c786fd0a-7bb8-47e3-a802-70e915930a5f 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:07.525 { 00:29:07.525 "uuid": "e9a87ba0-9a47-4835-a37d-5be7eba6ad00", 00:29:07.525 "name": "lvs_0", 00:29:07.525 "base_bdev": "Nvme0n1", 00:29:07.525 "total_data_clusters": 930, 00:29:07.525 "free_clusters": 0, 00:29:07.525 "block_size": 512, 00:29:07.525 "cluster_size": 1073741824 00:29:07.525 }, 00:29:07.525 { 00:29:07.525 "uuid": "c786fd0a-7bb8-47e3-a802-70e915930a5f", 00:29:07.525 "name": "lvs_n_0", 00:29:07.525 "base_bdev": "01330e72-2153-4242-9855-09bbcaf7efc0", 00:29:07.525 "total_data_clusters": 237847, 00:29:07.525 "free_clusters": 237847, 00:29:07.525 "block_size": 512, 00:29:07.525 
"cluster_size": 4194304 00:29:07.525 } 00:29:07.525 ]' 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c786fd0a-7bb8-47e3-a802-70e915930a5f") .free_clusters' 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c786fd0a-7bb8-47e3-a802-70e915930a5f") .cluster_size' 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:07.525 951388 00:29:07.525 09:02:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:08.464 82a6d20b-738a-4b50-b2cc-cc6a009e8b9b 00:29:08.464 09:02:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:08.464 09:02:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:08.721 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:08.980 
09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:08.980 09:02:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:09.240 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:09.240 fio-3.35 00:29:09.240 Starting 1 thread 00:29:09.240 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.766 00:29:11.766 test: (groupid=0, jobs=1): err= 0: pid=1074018: Fri Jul 26 09:02:29 2024 00:29:11.766 read: IOPS=5779, BW=22.6MiB/s (23.7MB/s)(45.3MiB/2008msec) 00:29:11.766 slat (usec): min=2, max=136, avg= 2.72, stdev= 2.02 00:29:11.766 clat (usec): min=4460, max=21016, avg=12231.17, stdev=1074.31 00:29:11.766 lat (usec): min=4465, max=21018, avg=12233.89, stdev=1074.20 00:29:11.766 clat percentiles (usec): 00:29:11.766 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[10945], 20.00th=[11469], 00:29:11.766 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:29:11.766 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:29:11.766 | 99.00th=[14615], 99.50th=[14877], 99.90th=[19530], 99.95th=[20841], 00:29:11.766 | 
99.99th=[20841] 00:29:11.766 bw ( KiB/s): min=22016, max=23536, per=99.73%, avg=23056.00, stdev=699.74, samples=4 00:29:11.766 iops : min= 5504, max= 5884, avg=5764.00, stdev=174.94, samples=4 00:29:11.766 write: IOPS=5763, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2008msec); 0 zone resets 00:29:11.766 slat (usec): min=2, max=121, avg= 2.83, stdev= 1.68 00:29:11.766 clat (usec): min=2133, max=18406, avg=9798.16, stdev=905.57 00:29:11.766 lat (usec): min=2138, max=18409, avg=9800.98, stdev=905.51 00:29:11.766 clat percentiles (usec): 00:29:11.766 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:29:11.766 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:29:11.766 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:29:11.766 | 99.00th=[11731], 99.50th=[12125], 99.90th=[16057], 99.95th=[17433], 00:29:11.766 | 99.99th=[18220] 00:29:11.766 bw ( KiB/s): min=22952, max=23168, per=99.97%, avg=23046.00, stdev=107.21, samples=4 00:29:11.766 iops : min= 5738, max= 5792, avg=5761.50, stdev=26.80, samples=4 00:29:11.766 lat (msec) : 4=0.05%, 10=30.64%, 20=69.29%, 50=0.03% 00:29:11.766 cpu : usr=58.50%, sys=37.97%, ctx=110, majf=0, minf=40 00:29:11.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:11.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:11.766 issued rwts: total=11605,11573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.766 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:11.766 00:29:11.766 Run status group 0 (all jobs): 00:29:11.766 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.3MiB (47.5MB), run=2008-2008msec 00:29:11.766 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2008-2008msec 00:29:11.766 09:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:12.024 09:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:12.024 09:02:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:16.211 09:02:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:16.211 09:02:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:19.497 09:02:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:19.497 09:02:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:21.402 rmmod nvme_tcp 
00:29:21.402 rmmod nvme_fabrics 00:29:21.402 rmmod nvme_keyring 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1071173 ']' 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1071173 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1071173 ']' 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1071173 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1071173 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1071173' 00:29:21.402 killing process with pid 1071173 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1071173 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1071173 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.402 09:02:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.940 09:02:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:23.940 00:29:23.940 real 0m37.218s 00:29:23.940 user 2m23.635s 00:29:23.940 sys 0m6.800s 00:29:23.940 09:02:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.940 09:02:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.940 ************************************ 00:29:23.940 END TEST nvmf_fio_host 00:29:23.940 ************************************ 00:29:23.940 09:02:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:23.940 09:02:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:23.940 09:02:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.940 09:02:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.940 ************************************ 00:29:23.940 START TEST nvmf_failover 00:29:23.940 ************************************ 00:29:23.940 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:23.940 * Looking for test 
storage... 00:29:23.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:23.940 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.940 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:23.941 09:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.845 
09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:25.845 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:25.845 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:25.845 09:02:43 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:25.845 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:25.845 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:25.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:25.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:29:25.845 00:29:25.845 --- 10.0.0.2 ping statistics --- 00:29:25.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.845 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:29:25.845 00:29:25.845 --- 10.0.0.1 ping statistics --- 00:29:25.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.845 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1077261 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1077261 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1077261 ']' 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:25.845 09:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:25.845 [2024-07-26 09:02:43.998272] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:25.845 [2024-07-26 09:02:43.998349] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.845 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.845 [2024-07-26 09:02:44.035835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:29:25.845 [2024-07-26 09:02:44.068328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:25.845 [2024-07-26 09:02:44.161490] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.845 [2024-07-26 09:02:44.161552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.845 [2024-07-26 09:02:44.161570] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.845 [2024-07-26 09:02:44.161584] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.845 [2024-07-26 09:02:44.161596] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.845 [2024-07-26 09:02:44.161680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.845 [2024-07-26 09:02:44.161817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.845 [2024-07-26 09:02:44.161820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.845 09:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:25.845 09:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:25.845 09:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:25.845 09:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:25.845 09:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:25.845 09:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.845 09:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:29:26.414 [2024-07-26 09:02:44.574629] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.414 09:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:26.675 Malloc0 00:29:26.675 09:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.675 09:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:27.241 09:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.499 [2024-07-26 09:02:45.708649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.499 09:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:27.757 [2024-07-26 09:02:45.973347] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:27.757 09:02:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:28.016 [2024-07-26 09:02:46.218146] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:28.016 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1077547 00:29:28.016 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:28.016 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:28.016 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1077547 /var/tmp/bdevperf.sock 00:29:28.016 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1077547 ']' 00:29:28.016 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:28.016 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:28.016 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:28.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:28.016 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:28.016 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:28.310 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:28.310 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:28.310 09:02:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:28.569 NVMe0n1 00:29:28.569 09:02:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:29.135 00:29:29.135 09:02:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1077742 00:29:29.135 09:02:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:29.135 09:02:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:30.071 09:02:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.330 [2024-07-26 09:02:48.762559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02480 is same with the state(5) to be set 00:29:30.330 [2024-07-26 09:02:48.762663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02480 is same with the state(5) to be set 00:29:30.330 [2024-07-26 
09:02:48.762679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02480 is same with the state(5) to be set 00:29:30.330 (same tcp.c:1653 message repeated verbatim with timestamps 09:02:48.762691 through 09:02:48.762990) 00:29:30.331 [2024-07-26 09:02:48.763001] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02480 is same with the state(5) to be set 00:29:30.331 [2024-07-26 09:02:48.763013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02480 is same with the state(5) to be set 00:29:30.331 [2024-07-26 09:02:48.763025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02480 is same with the state(5) to be set 00:29:30.331 [2024-07-26 09:02:48.763036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02480 is same with the state(5) to be set 00:29:30.331 [2024-07-26 09:02:48.763048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a02480 is same with the state(5) to be set 00:29:30.331 09:02:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:33.620 09:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:33.878 00:29:33.878 09:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:34.138 [2024-07-26 09:02:52.403641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03250 is same with the state(5) to be set 00:29:34.138 [2024-07-26 09:02:52.403709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03250 is same with the state(5) to be set 00:29:34.138 [2024-07-26 09:02:52.403736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03250 is same with the state(5) to be set 00:29:34.138 [2024-07-26 09:02:52.403749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a03250 is same with the state(5) to be set 00:29:34.138 [2024-07-26 09:02:52.403761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03250 is same with the state(5) to be set 00:29:34.138 [2024-07-26 09:02:52.403773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03250 is same with the state(5) to be set 00:29:34.138 [2024-07-26 09:02:52.403785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03250 is same with the state(5) to be set 00:29:34.138 [2024-07-26 09:02:52.403798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03250 is same with the state(5) to be set 00:29:34.138 09:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:37.425 09:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.425 [2024-07-26 09:02:55.652382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.425 09:02:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:38.357 09:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:38.615 [2024-07-26 09:02:56.954646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03ff0 is same with the state(5) to be set 00:29:38.615 [2024-07-26 09:02:56.954731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03ff0 is same with the state(5) to be set 00:29:38.615 [2024-07-26 09:02:56.954745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03ff0 is same with the state(5) to be set 00:29:38.615 
[2024-07-26 09:02:56.954758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03ff0 is same with the state(5) to be set 00:29:38.615 (same tcp.c:1653 message repeated verbatim with timestamps 09:02:56.954770 through 09:02:56.954887) 00:29:38.615 [2024-07-26 09:02:56.954899] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a03ff0 is same with the state(5) to be set 00:29:38.615 09:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1077742 00:29:45.231 0 00:29:45.231 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1077547 00:29:45.231 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1077547 ']' 00:29:45.231 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1077547 00:29:45.231 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:45.231 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:45.231 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1077547 00:29:45.231 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:45.231 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:45.231 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1077547' 00:29:45.231 killing process with pid 1077547 00:29:45.232 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1077547 00:29:45.232 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1077547 00:29:45.232 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:45.232 [2024-07-26 09:02:46.282026] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:29:45.232 [2024-07-26 09:02:46.282122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077547 ] 00:29:45.232 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.232 [2024-07-26 09:02:46.313710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:45.232 [2024-07-26 09:02:46.342857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.232 [2024-07-26 09:02:46.429859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.232 Running I/O for 15 seconds... 00:29:45.232 [2024-07-26 09:02:48.763372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.763466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.763523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.763578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82064 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.763631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.763697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.763761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.763810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.763859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.763909] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.763957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.763993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:45.232 [2024-07-26 09:02:48.764525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.764963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.764987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.765009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.232 [2024-07-26 09:02:48.765034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.232 [2024-07-26 09:02:48.765057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.232 [2024-07-26 09:02:48.765109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.232 [2024-07-26 09:02:48.765134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated ABORTED - SQ DELETION notices for the remaining queued WRITE (lba 82304-82888) and READ (lba 81872-82032) commands elided ...]
00:29:45.233 [2024-07-26 09:02:48.770110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:45.233 [2024-07-26 09:02:48.770134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:45.233 [2024-07-26 09:02:48.770155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82032 len:8 PRP1 0x0 PRP2 0x0
00:29:45.233 [2024-07-26 09:02:48.770186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.233 [2024-07-26 09:02:48.770275] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1603cb0 was disconnected and freed. reset controller.
00:29:45.233 [2024-07-26 09:02:48.770305] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:45.233 [2024-07-26 09:02:48.770371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:45.233 [2024-07-26 09:02:48.770399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated ASYNC EVENT REQUEST abort notices for admin qpair cid:1-3 elided ...]
00:29:45.233 [2024-07-26 09:02:48.770588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.233 [2024-07-26 09:02:48.770652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1610850 (9): Bad file descriptor
00:29:45.233 [2024-07-26 09:02:48.774930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.233 [2024-07-26 09:02:48.806998] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:45.233 [2024-07-26 09:02:52.404173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.233 [2024-07-26 09:02:52.404221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated ABORTED - SQ DELETION notices for the remaining queued WRITE commands (lba 79840 onward) elided ...]
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.233 [2024-07-26 09:02:52.405010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.233 [2024-07-26 09:02:52.405032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.233 [2024-07-26 09:02:52.405091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.233 [2024-07-26 09:02:52.405117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.233 [2024-07-26 09:02:52.405143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.233 [2024-07-26 09:02:52.405166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.233 [2024-07-26 09:02:52.405192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.233 [2024-07-26 09:02:52.405216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.233 [2024-07-26 09:02:52.405244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.233 [2024-07-26 09:02:52.405266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.233 [2024-07-26 09:02:52.405293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.233 
[2024-07-26 09:02:52.405316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.233 [2024-07-26 09:02:52.405349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.233 [2024-07-26 09:02:52.405385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.233 [2024-07-26 09:02:52.405410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.233 [2024-07-26 09:02:52.405433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.233 [2024-07-26 09:02:52.405457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.405968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.405992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:45.234 [2024-07-26 09:02:52.406190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.406959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.406984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 
[2024-07-26 09:02:52.407056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.407920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:45.234 [2024-07-26 09:02:52.407969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.407993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.234 [2024-07-26 09:02:52.408779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.408827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 
[2024-07-26 09:02:52.408877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.408942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.408967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.408992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.409017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.409042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.409089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.409117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.409144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.234 [2024-07-26 09:02:52.409170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.234 [2024-07-26 09:02:52.409196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.234 [2024-07-26 09:02:52.409222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-26 09:02:52.409248 - 09:02:52.410862: further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs in the same pattern: READ sqid:1 lba:79648-79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 lba:80088-80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:29:45.235 [2024-07-26 09:02:52.410905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:45.235 [2024-07-26 09:02:52.410928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:45.235 [2024-07-26 09:02:52.410952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80152 len:8 PRP1 0x0 PRP2 0x0
00:29:45.235 [2024-07-26 09:02:52.410976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.235 [2024-07-26 09:02:52.411087] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1634670 was disconnected and freed. reset controller.
00:29:45.235 [2024-07-26 09:02:52.411117] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:29:45.235 [2024-07-26 09:02:52.411186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:45.235 [2024-07-26 09:02:52.411215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.235 [2024-07-26 09:02:52.411240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:45.235 [2024-07-26 09:02:52.411263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.235 [2024-07-26 09:02:52.411287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:45.235 [2024-07-26 09:02:52.411310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.235 [2024-07-26 09:02:52.411335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:45.235 [2024-07-26 09:02:52.411357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.235 [2024-07-26 09:02:52.411381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.235 [2024-07-26 09:02:52.411453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1610850 (9): Bad file descriptor
00:29:45.235 [2024-07-26 09:02:52.415756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.235 [2024-07-26 09:02:52.581081] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:45.235 [2024-07-26 09:02:56.955889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.235 [2024-07-26 09:02:56.955942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.235 [2024-07-26 09:02:56.955982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.235 [2024-07-26 09:02:56.956008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.235 [2024-07-26 09:02:56.956036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.235 [2024-07-26 09:02:56.956085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.235 [2024-07-26 09:02:56.956115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:45.235 [2024-07-26 09:02:56.956142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.235 [2024-07-26 09:02:56.956169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42528 len:8 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0
00:29:45.235 [2024-07-26 09:02:56.956196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-26 09:02:56.956231 - 09:02:56.959795: further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs in the same pattern: READ sqid:1 lba:42536-42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 lba:42816-43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:29:45.236 [2024-07-26 09:02:56.959822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.236 [2024-07-26 09:02:56.959845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.959872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.959895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.959921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.959944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.959970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.959994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 
09:02:56.960469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960691] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.236 [2024-07-26 09:02:56.960770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.960828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43424 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.960848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.960903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.960923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43432 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.960944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.960965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.960983] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43440 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43448 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43456 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43464 len:8 PRP1 0x0 PRP2 0x0 
00:29:45.236 [2024-07-26 09:02:56.961284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43472 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43480 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43488 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961560] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43496 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43504 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43512 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961826] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42624 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42632 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.961941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.961956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.961976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42640 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.961994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.962013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.962031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.962057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42648 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.962100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.962119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.962136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.962158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42656 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.962181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.962201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.962218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.962239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42664 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.962261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.962280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.962299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.962321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42672 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.962342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.962361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.962393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.962413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42680 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.962431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.962449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.962467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.962485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42688 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.962503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.962526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.962543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.962564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42696 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.962582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.962599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.962615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.962636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42704 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.962657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.962675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.962692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.962714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42712 len:8 PRP1 0x0 PRP2 0x0 00:29:45.236 [2024-07-26 09:02:56.962735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.236 [2024-07-26 09:02:56.962754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.236 [2024-07-26 09:02:56.962776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.236 [2024-07-26 09:02:56.962796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42720 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.962814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.962832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 [2024-07-26 09:02:56.962850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.962869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42728 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.962888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.962909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 
[2024-07-26 09:02:56.962926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.962947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42736 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.962968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.962987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 [2024-07-26 09:02:56.963005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.963026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42744 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.963043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.963085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 [2024-07-26 09:02:56.963105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.963123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42752 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.963141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.963166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 [2024-07-26 09:02:56.963185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.963207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:42760 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.963225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.963242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 [2024-07-26 09:02:56.963259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.963279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42768 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.963300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.963318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 [2024-07-26 09:02:56.963333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.963359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42776 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.963391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.963415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 [2024-07-26 09:02:56.963431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.963448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42784 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.963468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.963487] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 [2024-07-26 09:02:56.963504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.963524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42792 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.963544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.963562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 [2024-07-26 09:02:56.963578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.963599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42800 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.963620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.963639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:45.237 [2024-07-26 09:02:56.963657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:45.237 [2024-07-26 09:02:56.963677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42808 len:8 PRP1 0x0 PRP2 0x0 00:29:45.237 [2024-07-26 09:02:56.963700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.237 [2024-07-26 09:02:56.963774] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1634330 was disconnected and freed. reset controller. 
00:29:45.237 [2024-07-26 09:02:56.963796] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:29:45.237 [2024-07-26 09:02:56.963858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:45.237 [2024-07-26 09:02:56.963881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.237 [2024-07-26 09:02:56.963905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:45.237 [2024-07-26 09:02:56.963926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.237 [2024-07-26 09:02:56.963949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:45.237 [2024-07-26 09:02:56.963970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.237 [2024-07-26 09:02:56.963989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:45.237 [2024-07-26 09:02:56.964008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:45.237 [2024-07-26 09:02:56.964031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.237 [2024-07-26 09:02:56.964107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1610850 (9): Bad file descriptor
00:29:45.237 [2024-07-26 09:02:56.967860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.237 [2024-07-26 09:02:57.096648] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:45.237
00:29:45.237 Latency(us)
00:29:45.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:45.237 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:45.237 Verification LBA range: start 0x0 length 0x4000
00:29:45.237 NVMe0n1 : 15.02 8519.92 33.28 752.09 0.00 13777.75 825.27 16699.54
00:29:45.237 ===================================================================================================================
00:29:45.237 Total : 8519.92 33.28 752.09 0.00 13777.75 825.27 16699.54
00:29:45.237 Received shutdown signal, test time was about 15.000000 seconds
00:29:45.237
00:29:45.237 Latency(us)
00:29:45.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:45.237 ===================================================================================================================
00:29:45.237 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1079636
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1079636 /var/tmp/bdevperf.sock
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1079636 ']'
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:45.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:45.237 09:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:29:45.237 09:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:45.237 09:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:29:45.237 09:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-26 09:03:03.460317] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
09:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:45.494 [2024-07-26 09:03:03.717012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
09:03:03 nvmf_tcp.nvmf_host.nvmf_failover
-- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:45.752 NVMe0n1 00:29:46.011 09:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:46.268 00:29:46.269 09:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:46.526 00:29:46.526 09:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:46.526 09:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:46.784 09:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:47.042 09:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:29:50.329 09:03:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:50.329 09:03:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:29:50.329 09:03:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1080717 00:29:50.329 09:03:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:50.329 09:03:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1080717 00:29:51.771 0 00:29:51.771 09:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:51.771 [2024-07-26 09:03:02.961942] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:51.771 [2024-07-26 09:03:02.962024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079636 ] 00:29:51.771 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.771 [2024-07-26 09:03:02.994975] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:51.771 [2024-07-26 09:03:03.023923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.771 [2024-07-26 09:03:03.107350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.771 [2024-07-26 09:03:05.397719] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:51.771 [2024-07-26 09:03:05.397802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.771 [2024-07-26 09:03:05.397824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.771 [2024-07-26 09:03:05.397855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.771 [2024-07-26 09:03:05.397869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.771 [2024-07-26 09:03:05.397883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.771 [2024-07-26 09:03:05.397897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.771 [2024-07-26 09:03:05.397911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.771 [2024-07-26 09:03:05.397925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.771 [2024-07-26 09:03:05.397939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:51.771 [2024-07-26 09:03:05.397982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.771 [2024-07-26 09:03:05.398013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23850 (9): Bad file descriptor 00:29:51.771 [2024-07-26 09:03:05.490208] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:51.771 Running I/O for 1 seconds... 00:29:51.771 00:29:51.771 Latency(us) 00:29:51.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.771 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:51.771 Verification LBA range: start 0x0 length 0x4000 00:29:51.771 NVMe0n1 : 1.01 8402.65 32.82 0.00 0.00 15168.79 1492.76 16019.91 00:29:51.771 =================================================================================================================== 00:29:51.771 Total : 8402.65 32.82 0.00 0.00 15168.79 1492.76 16019.91 00:29:51.771 09:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:51.771 09:03:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:29:51.771 09:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:52.028 09:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:52.028 09:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:29:52.286 09:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:52.543 09:03:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:29:55.825 09:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:55.825 09:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1079636 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1079636 ']' 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1079636 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1079636 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1079636' 00:29:55.825 killing process with pid 1079636 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1079636 00:29:55.825 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1079636 00:29:56.083 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:29:56.083 09:03:14 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:56.342 rmmod nvme_tcp 00:29:56.342 rmmod nvme_fabrics 00:29:56.342 rmmod nvme_keyring 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1077261 ']' 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1077261 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1077261 ']' 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1077261 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@955 -- # uname 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1077261 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1077261' 00:29:56.342 killing process with pid 1077261 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1077261 00:29:56.342 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1077261 00:29:56.599 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:56.599 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:56.599 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:56.599 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:56.599 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:56.599 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.599 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.599 09:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:59.126 00:29:59.126 real 0m35.194s 00:29:59.126 user 2m2.424s 00:29:59.126 sys 0m6.741s 
00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:59.126 ************************************ 00:29:59.126 END TEST nvmf_failover 00:29:59.126 ************************************ 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.126 ************************************ 00:29:59.126 START TEST nvmf_host_discovery 00:29:59.126 ************************************ 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:59.126 * Looking for test storage... 
00:29:59.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- 
# NVME_CONNECT='nvme connect' 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:29:59.126 09:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:01.027 
09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.027 09:03:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:01.027 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:01.027 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:01.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:01.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:01.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:01.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:01.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:30:01.028 00:30:01.028 --- 10.0.0.2 ping statistics --- 00:30:01.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.028 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:01.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:30:01.028 00:30:01.028 --- 10.0.0.1 ping statistics --- 00:30:01.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.028 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1083527 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1083527 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1083527 ']' 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:01.028 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.028 [2024-07-26 09:03:19.346429] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:01.028 [2024-07-26 09:03:19.346519] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.028 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.028 [2024-07-26 09:03:19.384995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:01.028 [2024-07-26 09:03:19.411188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.287 [2024-07-26 09:03:19.496326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:01.287 [2024-07-26 09:03:19.496391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.287 [2024-07-26 09:03:19.496420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.287 [2024-07-26 09:03:19.496432] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.287 [2024-07-26 09:03:19.496443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.287 [2024-07-26 09:03:19.496468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 [2024-07-26 09:03:19.627313] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.287 09:03:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 [2024-07-26 09:03:19.635539] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 null0 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 null1 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1083551 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1083551 /tmp/host.sock 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1083551 ']' 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:01.287 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:01.287 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.287 [2024-07-26 09:03:19.706471] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:01.287 [2024-07-26 09:03:19.706556] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083551 ] 00:30:01.287 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.287 [2024-07-26 09:03:19.738518] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:30:01.545 [2024-07-26 09:03:19.768428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.545 [2024-07-26 09:03:19.858532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:01.545 09:03:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:01.545 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.803 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:01.803 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:01.804 09:03:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.804 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.062 [2024-07-26 09:03:20.265242] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:02.062 09:03:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:02.062 09:03:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.062 09:03:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:30:02.062 09:03:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:02.631 [2024-07-26 09:03:21.036309] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:02.631 [2024-07-26 09:03:21.036338] bdev_nvme.c:7091:discovery_poller: 
*INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:02.631 [2024-07-26 09:03:21.036380] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:02.889 [2024-07-26 09:03:21.122662] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:02.889 [2024-07-26 09:03:21.226111] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:02.889 [2024-07-26 09:03:21.226134] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition 
'[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 
00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.147 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:03.406 
09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:03.406 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.666 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:03.666 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.666 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.667 [2024-07-26 09:03:21.922300] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:03.667 [2024-07-26 09:03:21.923094] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:03.667 [2024-07-26 09:03:21.923141] 
bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:03.667 09:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.667 [2024-07-26 09:03:22.050962] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:03.667 09:03:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # 
sleep 1 00:30:03.927 [2024-07-26 09:03:22.149701] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:03.927 [2024-07-26 09:03:22.149727] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:03.927 [2024-07-26 09:03:22.149738] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:04.866 09:03:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.866 [2024-07-26 09:03:23.159027] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:04.866 [2024-07-26 09:03:23.159083] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:04.866 [2024-07-26 09:03:23.159158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.866 [2024-07-26 09:03:23.159190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.866 [2024-07-26 09:03:23.159207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:04.866 [2024-07-26 09:03:23.159221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.866 [2024-07-26 09:03:23.159243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.866 [2024-07-26 09:03:23.159265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.866 [2024-07-26 09:03:23.159280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.866 [2024-07-26 09:03:23.159294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.866 [2024-07-26 09:03:23.159308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e6e0 is same with the state(5) to be set 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:04.866 09:03:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:04.866 [2024-07-26 09:03:23.169137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e6e0 (9): Bad file descriptor 00:30:04.866 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.866 [2024-07-26 09:03:23.179179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:04.866 [2024-07-26 09:03:23.179387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-07-26 09:03:23.179417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186e6e0 with addr=10.0.0.2, port=4420 00:30:04.866 [2024-07-26 09:03:23.179435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e6e0 is same with the state(5) to be set 00:30:04.866 [2024-07-26 09:03:23.179458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e6e0 (9): Bad file descriptor 00:30:04.866 [2024-07-26 09:03:23.179481] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:04.866 [2024-07-26 09:03:23.179496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:04.866 
[2024-07-26 09:03:23.179512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:04.866 [2024-07-26 09:03:23.179533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:04.866 [2024-07-26 09:03:23.189274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:04.866 [2024-07-26 09:03:23.189484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-07-26 09:03:23.189512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186e6e0 with addr=10.0.0.2, port=4420 00:30:04.866 [2024-07-26 09:03:23.189529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e6e0 is same with the state(5) to be set 00:30:04.866 [2024-07-26 09:03:23.189556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e6e0 (9): Bad file descriptor 00:30:04.866 [2024-07-26 09:03:23.189591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:04.866 [2024-07-26 09:03:23.189609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:04.866 [2024-07-26 09:03:23.189622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:04.866 [2024-07-26 09:03:23.189641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:04.866 [2024-07-26 09:03:23.199370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:04.866 [2024-07-26 09:03:23.199589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.866 [2024-07-26 09:03:23.199617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186e6e0 with addr=10.0.0.2, port=4420 00:30:04.866 [2024-07-26 09:03:23.199633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e6e0 is same with the state(5) to be set 00:30:04.866 [2024-07-26 09:03:23.199655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e6e0 (9): Bad file descriptor 00:30:04.866 [2024-07-26 09:03:23.199676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:04.866 [2024-07-26 09:03:23.199689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:04.866 [2024-07-26 09:03:23.199703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:04.867 [2024-07-26 09:03:23.199722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:04.867 [2024-07-26 09:03:23.209439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:04.867 [2024-07-26 09:03:23.209687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-07-26 09:03:23.209716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186e6e0 with addr=10.0.0.2, port=4420 00:30:04.867 [2024-07-26 09:03:23.209732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e6e0 is same with the state(5) to be set 00:30:04.867 [2024-07-26 09:03:23.209755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e6e0 (9): Bad file descriptor 00:30:04.867 [2024-07-26 09:03:23.209792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:04.867 [2024-07-26 09:03:23.209810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:04.867 [2024-07-26 09:03:23.209824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:04.867 [2024-07-26 09:03:23.209843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:04.867 [2024-07-26 09:03:23.219526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:04.867 [2024-07-26 09:03:23.219770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-07-26 09:03:23.219798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186e6e0 with addr=10.0.0.2, port=4420 00:30:04.867 [2024-07-26 09:03:23.219815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e6e0 is same with the state(5) to be set 00:30:04.867 [2024-07-26 09:03:23.219838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e6e0 (9): Bad file descriptor 00:30:04.867 [2024-07-26 09:03:23.219870] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:04.867 [2024-07-26 09:03:23.219888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:04.867 [2024-07-26 09:03:23.219902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:04.867 [2024-07-26 09:03:23.219921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:04.867 [2024-07-26 09:03:23.229611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:04.867 [2024-07-26 09:03:23.229835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-07-26 09:03:23.229863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186e6e0 with addr=10.0.0.2, port=4420 00:30:04.867 [2024-07-26 09:03:23.229880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e6e0 is same with the state(5) to be set 00:30:04.867 [2024-07-26 09:03:23.229902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e6e0 (9): Bad file descriptor 00:30:04.867 [2024-07-26 09:03:23.229947] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:04.867 [2024-07-26 09:03:23.229966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:04.867 [2024-07-26 09:03:23.229980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:04.867 [2024-07-26 09:03:23.229999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:04.867 [2024-07-26 09:03:23.239693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:04.867 [2024-07-26 09:03:23.239916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.867 [2024-07-26 09:03:23.239943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186e6e0 with addr=10.0.0.2, port=4420 00:30:04.867 [2024-07-26 09:03:23.239960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186e6e0 is same with the state(5) to be set 00:30:04.867 [2024-07-26 09:03:23.239981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186e6e0 (9): Bad file descriptor 00:30:04.867 [2024-07-26 09:03:23.240014] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:04.867 [2024-07-26 09:03:23.240031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:04.867 [2024-07-26 09:03:23.240044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:04.867 [2024-07-26 09:03:23.240077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.867 [2024-07-26 09:03:23.247028] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:04.867 [2024-07-26 09:03:23.247078] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:04.867 
09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.867 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_subsystem_names 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:05.128 09:03:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.128 09:03:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.067 [2024-07-26 09:03:24.514139] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:06.067 [2024-07-26 09:03:24.514176] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:06.067 [2024-07-26 09:03:24.514199] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:06.325 [2024-07-26 09:03:24.641647] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:30:06.583 [2024-07-26 09:03:24.949780] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:06.583 [2024-07-26 09:03:24.949834] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.583 request: 00:30:06.583 { 
00:30:06.583 "name": "nvme", 00:30:06.583 "trtype": "tcp", 00:30:06.583 "traddr": "10.0.0.2", 00:30:06.583 "adrfam": "ipv4", 00:30:06.583 "trsvcid": "8009", 00:30:06.583 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:06.583 "wait_for_attach": true, 00:30:06.583 "method": "bdev_nvme_start_discovery", 00:30:06.583 "req_id": 1 00:30:06.583 } 00:30:06.583 Got JSON-RPC error response 00:30:06.583 response: 00:30:06.583 { 00:30:06.583 "code": -17, 00:30:06.583 "message": "File exists" 00:30:06.583 } 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:06.583 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.583 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:06.583 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:06.583 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.583 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.583 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:06.583 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.583 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:06.583 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:06.583 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type 
-t rpc_cmd 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 request: 00:30:06.841 { 00:30:06.841 "name": "nvme_second", 00:30:06.841 "trtype": "tcp", 00:30:06.841 "traddr": "10.0.0.2", 00:30:06.841 "adrfam": "ipv4", 00:30:06.841 "trsvcid": "8009", 00:30:06.841 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:06.841 "wait_for_attach": true, 00:30:06.841 "method": "bdev_nvme_start_discovery", 00:30:06.841 "req_id": 1 00:30:06.841 } 00:30:06.841 Got JSON-RPC error response 00:30:06.841 response: 00:30:06.841 { 00:30:06.841 "code": -17, 00:30:06.841 "message": "File exists" 00:30:06.841 } 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:06.841 09:03:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:06.841 09:03:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.841 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:07.777 [2024-07-26 09:03:26.165783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.777 [2024-07-26 09:03:26.165831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ace40 with addr=10.0.0.2, port=8010 00:30:07.777 [2024-07-26 09:03:26.165857] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:07.777 [2024-07-26 09:03:26.165873] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:07.777 [2024-07-26 09:03:26.165887] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:08.756 [2024-07-26 09:03:27.168165] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.756 [2024-07-26 09:03:27.168200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ace40 with addr=10.0.0.2, port=8010 00:30:08.756 [2024-07-26 09:03:27.168221] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:08.756 [2024-07-26 09:03:27.168234] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:08.756 [2024-07-26 09:03:27.168246] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:10.130 [2024-07-26 09:03:28.170387] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:10.130 request: 00:30:10.130 { 00:30:10.130 "name": "nvme_second", 00:30:10.130 "trtype": "tcp", 00:30:10.130 "traddr": "10.0.0.2", 00:30:10.130 "adrfam": "ipv4", 00:30:10.130 "trsvcid": "8010", 00:30:10.130 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:10.130 "wait_for_attach": false, 00:30:10.130 "attach_timeout_ms": 3000, 00:30:10.130 "method": "bdev_nvme_start_discovery", 00:30:10.130 "req_id": 1 00:30:10.130 } 00:30:10.130 Got JSON-RPC error response 00:30:10.130 response: 00:30:10.130 { 00:30:10.130 "code": -110, 00:30:10.130 "message": "Connection timed out" 00:30:10.130 } 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1083551 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:10.130 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:10.131 rmmod nvme_tcp 00:30:10.131 rmmod nvme_fabrics 00:30:10.131 rmmod nvme_keyring 00:30:10.131 09:03:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1083527 ']' 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1083527 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1083527 ']' 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1083527 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1083527 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1083527' 00:30:10.131 killing process with pid 1083527 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1083527 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1083527 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:10.131 09:03:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:10.131 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:12.671 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:12.671
00:30:12.671 real 0m13.540s
00:30:12.671 user 0m19.667s
00:30:12.671 sys 0m2.904s
00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:12.672 ************************************
00:30:12.672 END TEST nvmf_host_discovery
00:30:12.672 ************************************
00:30:12.672 09:03:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:30:12.672 09:03:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:30:12.672 09:03:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:12.672 09:03:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:12.672 ************************************
00:30:12.672 START TEST nvmf_host_multipath_status
00:30:12.672 ************************************
00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status --
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:12.672 * Looking for test storage... 00:30:12.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:12.672 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:12.672 09:03:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:14.577 
09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:14.577 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:14.577 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:14.577 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:14.578 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:14.578 09:03:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:14.578 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.578 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:14.579 09:03:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:14.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:14.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms
00:30:14.579
00:30:14.579 --- 10.0.0.2 ping statistics ---
00:30:14.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:14.579 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:14.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:14.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms
00:30:14.579
00:30:14.579 --- 10.0.0.1 ping statistics ---
00:30:14.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:14.579 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:30:14.579 09:03:32
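[Editor's note] The `nvmf_tcp_init` sequence traced above builds a loopback-style test topology: one port of the NIC pair (`cvl_0_0`) is moved into a private network namespace to act as the target, the other (`cvl_0_1`) stays in the root namespace as the initiator, each side gets a 10.0.0.0/24 address, an iptables rule admits NVMe/TCP port 4420, and both directions are ping-verified. A condensed sketch of those commands follows; interface names and addresses are taken from this log, and `run` is a dry-run wrapper (it prints rather than executes, since the real commands need root and this specific hardware):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init topology from the log above.
# run() only echoes each command; replace its body with "sudo $@" to apply.
run() { echo "+ $*"; }

TGT_IF=cvl_0_0          # moved into the target namespace
INI_IF=cvl_0_1          # stays in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk      # NVMF_TARGET_NAMESPACE in the log
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                         # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INI_IP"     # target -> initiator
```

Because the target lives in `$NS`, the log's later `nvmf_tgt` launch is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).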
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1086613 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1086613 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1086613 ']' 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:14.579 09:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:14.579 [2024-07-26 09:03:32.804653] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:30:14.579 [2024-07-26 09:03:32.804735] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.579 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.579 [2024-07-26 09:03:32.843379] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:14.579 [2024-07-26 09:03:32.875295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:14.579 [2024-07-26 09:03:32.964443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.579 [2024-07-26 09:03:32.964508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.579 [2024-07-26 09:03:32.964525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.579 [2024-07-26 09:03:32.964538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.579 [2024-07-26 09:03:32.964550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:14.579 [2024-07-26 09:03:32.964634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:14.579 [2024-07-26 09:03:32.964641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:14.838 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:14.838 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:30:14.838 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:14.838 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:14.838 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:30:14.838 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:14.838 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1086613
00:30:14.838 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:30:15.109 [2024-07-26 09:03:33.347391] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:15.109 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:30:15.368 Malloc0
00:30:15.368 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:30:15.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:15.886 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:16.146 [2024-07-26 09:03:34.364027] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:16.146 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:16.405 [2024-07-26 09:03:34.620839] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:16.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1086865
00:30:16.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:30:16.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:30:16.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1086865 /var/tmp/bdevperf.sock
00:30:16.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1086865 ']'
00:30:16.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:16.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:16.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:16.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:16.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:16.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:30:16.664 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:16.664 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:30:16.664 09:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:30:16.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:30:17.491 Nvme0n1
00:30:17.491 09:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:30:17.751 Nvme0n1
00:30:17.751 09:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:30:17.751 09:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:30:19.743 09:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:30:19.743 09:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:30:20.001 09:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:20.261 09:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:30:21.196 09:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:30:21.196 09:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:21.196 09:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:21.196 09:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:21.454 09:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:21.454 09:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:21.454 09:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:21.454 09:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:21.712 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:21.712 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:21.712 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:21.712 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:21.970 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:21.970 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:21.970 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:21.970 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:22.229 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:22.229 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:22.229 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:22.229 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:22.487 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:22.487 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:22.487 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:22.487 09:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:22.745 09:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:22.745 09:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:30:22.745 09:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:30:23.003 09:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:23.261 09:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:30:24.197 09:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:30:24.197 09:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:30:24.197 09:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:24.197 09:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:24.455 09:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:24.455 09:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:30:24.455 09:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:24.455 09:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:24.713 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:24.713 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:24.713 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:24.713 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:24.971 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:24.971 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:24.971 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:24.971 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:25.228 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:25.228 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:25.228 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:25.229 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:25.517 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:25.517 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:25.517 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:25.517 09:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:25.775 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:25.775 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:30:25.775 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:30:26.032 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:30:26.290 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:30:27.225 09:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:30:27.225 09:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:27.225 09:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:27.225 09:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:27.482 09:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:27.482 09:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:27.482 09:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:27.482 09:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:27.740 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:27.740 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:27.740 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:27.740 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:27.997 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:27.997 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:27.997 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:27.997 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:28.255 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:28.255 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:28.255 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:28.255 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:28.513 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:28.513 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:28.513 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:28.513 09:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:28.771 09:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:28.771 09:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:30:28.771 09:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:30:29.030 09:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:30:29.288 09:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:30:30.225 09:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:30:30.225 09:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:30.482 09:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:30.482 09:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:30.482 09:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:30.482 09:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:30.482 09:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:30.482 09:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:30.740 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:30.740 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:30.740 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:30.740 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:30.997 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:30.997 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:30.997 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:30.997 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:31.255 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:31.255 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:31.255 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:31.255 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:31.513 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:31.513 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:30:31.513 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:31.513 09:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:32.081 09:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:32.081 09:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:30:32.081 09:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:30:32.081 09:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:30:32.339 09:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:30:33.712 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:30:33.712 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:30:33.712 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:33.712 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:33.712 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:33.712 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:33.712 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:33.712 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:33.969 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:33.970 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:33.970 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:33.970 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:34.245 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:34.245 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:34.245 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:34.245 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:34.504 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:34.504 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:30:34.504 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:34.504 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:34.761 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:34.761 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:30:34.761 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:34.761 09:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:35.019 09:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:35.019 09:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:30:35.019 09:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:30:35.276 09:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:35.276 09:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:30:36.657 09:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:30:36.657 09:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:30:36.657 09:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:36.657 09:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:36.657 09:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:36.657 09:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:30:36.657 09:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:36.657 09:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:36.915 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:36.916 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:36.916 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:36.916 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:37.173 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:37.173 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:37.173 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:37.174 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:37.432 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:37.432 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:30:37.432 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:37.432 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:37.690 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:37.690 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:37.690 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:37.690 09:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:37.948 09:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:37.948 09:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:30:38.206 09:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:30:38.206 09:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:30:38.465 09:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:38.724 09:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:30:39.661 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:30:39.661 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:39.661 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:39.661 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:39.920 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:39.920 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:30:39.920 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:39.920 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:40.177 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:40.177 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:40.177 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:40.177 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:40.445 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:40.445 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:40.445 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:40.445 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:40.738 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:40.738 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:40.738
09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.738 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:40.996 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.996 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:40.996 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.996 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:41.253 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.253 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:41.253 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:41.510 09:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:41.769 09:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
00:30:42.707 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:42.707 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:42.707 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:42.707 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.965 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.965 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:42.965 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.965 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:43.223 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.223 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:43.223 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.223 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:30:43.480 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.480 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:43.480 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.480 09:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:43.737 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.737 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:43.737 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.737 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:43.995 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.995 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:43.995 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.995 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:30:44.253 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:44.253 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:44.253 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:44.510 09:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:44.768 09:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:30:45.701 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:45.701 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:45.701 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.702 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:45.960 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.960 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:45.960 09:04:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.960 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:46.218 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.218 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:46.218 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.218 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:46.476 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.476 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:46.476 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.476 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:46.733 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.733 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:46.733 09:04:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.733 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:46.991 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.991 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:46.991 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.991 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:47.249 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.249 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:47.249 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:47.507 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:47.766 09:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
00:30:48.703 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:48.703 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:48.703 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.703 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:48.961 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.961 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:48.961 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.961 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:49.218 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:49.218 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:49.218 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.218 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:30:49.477 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.477 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:49.477 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.477 09:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:49.734 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.734 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:49.734 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.734 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:49.992 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.992 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:49.992 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.992 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:30:50.250 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:50.250 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1086865 00:30:50.250 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1086865 ']' 00:30:50.250 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1086865 00:30:50.250 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:50.250 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:50.250 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1086865 00:30:50.520 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:30:50.520 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:30:50.520 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1086865' 00:30:50.520 killing process with pid 1086865 00:30:50.520 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1086865 00:30:50.520 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1086865 00:30:50.520 Connection closed with partial response: 00:30:50.520 00:30:50.520 00:30:50.520 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1086865 00:30:50.520 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:30:50.520 [2024-07-26 09:03:34.679695] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:50.520 [2024-07-26 09:03:34.679776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086865 ] 00:30:50.520 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.520 [2024-07-26 09:03:34.711802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:50.520 [2024-07-26 09:03:34.739809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.520 [2024-07-26 09:03:34.824116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.520 Running I/O for 90 seconds... 00:30:50.520 [2024-07-26 09:03:50.489776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.520 [2024-07-26 09:03:50.489838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:50.520 [2024-07-26 09:03:50.489918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.520 [2024-07-26 09:03:50.489938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:50.520 [2024-07-26 09:03:50.489962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.520 [2024-07-26 09:03:50.489978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 
m:0 dnr:0 00:30:50.520 [2024-07-26 09:03:50.490015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.520 [2024-07-26 09:03:50.490031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:50.520 [2024-07-26 09:03:50.490052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.520 [2024-07-26 09:03:50.490092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:50.520 [2024-07-26 09:03:50.490126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.520 [2024-07-26 09:03:50.490142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:50.520 [2024-07-26 09:03:50.490179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.520 [2024-07-26 09:03:50.490196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:50.520 [2024-07-26 09:03:50.490219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.490234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.490293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:50.521 [2024-07-26 09:03:50.490315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.490855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.490879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.490920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.490939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.490963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.490979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:50.521 
[2024-07-26 09:03:50.491088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 
09:03:50.491300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 
09:03:50.491539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491736] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.491984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.491999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.492021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.492050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.492084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.492101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.492124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.492139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.492162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.492177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.492199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.492215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.492237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.492252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.492274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.521 [2024-07-26 09:03:50.492290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:50.521 [2024-07-26 09:03:50.492312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.492969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.492985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.522 [2024-07-26 09:03:50.493882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.522 [2024-07-26 09:03:50.493925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.522 [2024-07-26 09:03:50.493968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:50.522 [2024-07-26 09:03:50.493995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.494965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.494981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:50.523 [2024-07-26 09:03:50.495668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.523 [2024-07-26 09:03:50.495684] nvme_qpair.c: 
00:30:50.523–00:30:50.527 [2024-07-26 09:03:50 – 09:04:06] nvme_qpair.c: repeated NOTICE output omitted: several hundred near-identical entries in which nvme_io_qpair_print_command (line 243) prints a READ or WRITE command on sqid:1 (nsid:1, len:8, SGL DATA BLOCK OFFSET / TRANSPORT DATA BLOCK) and spdk_nvme_print_completion (line 474) reports each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, p:0 m:0 dnr:0; only cid, lba, and sqhd vary between entries.
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.134983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.135022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:28736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.135066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.135107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.135144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.135181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.135218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.135255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.135296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.135335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.135388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.135425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.135460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.135496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.135532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.135567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.135603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.135639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.135674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.135696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.135710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.137348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.137388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.137421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.137439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.137461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.137477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.137499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.137515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.137537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.137552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.137574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.137590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.137611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.527 [2024-07-26 09:04:06.137626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.137647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.137663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:50.527 [2024-07-26 09:04:06.137685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.527 [2024-07-26 09:04:06.137716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.137738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.137753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.137774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.137789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.137810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.137825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.137846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.137861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.137886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.137902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.137923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.137938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.137958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.137973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.137994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.138009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.138331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.138387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.138463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.138499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.138535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.138628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.138643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.140947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.140972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.141033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.141083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.141123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.141165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.528 [2024-07-26 09:04:06.141203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.141241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.141278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.141316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.141367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.141405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.528 [2024-07-26 09:04:06.141442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:50.528 [2024-07-26 09:04:06.141463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.529 [2024-07-26 09:04:06.141478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:50.529 [2024-07-26 09:04:06.141499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.529 [2024-07-26 09:04:06.141514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:50.529 [2024-07-26 09:04:06.141535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.529 [2024-07-26 09:04:06.141550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:50.529 [2024-07-26 09:04:06.141571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.529 [2024-07-26 09:04:06.141586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:50.529 [2024-07-26 09:04:06.141607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.529 [2024-07-26 09:04:06.141622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:50.529 [2024-07-26 09:04:06.141647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.529 [2024-07-26 09:04:06.141663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:50.529 [2024-07-26 09:04:06.141684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.529 [2024-07-26 09:04:06.141699] nvme_qpair.c: 
00:30:50.529-00:30:50.532 [2024-07-26 09:04:06.141720 - 09:04:06.153781] nvme_qpair.c: [log truncated: repeated 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs; READ and WRITE I/O commands (len:8, nsid:1, various cid/lba) on qid:1 each completing with status ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0052 through 0045, p:0 m:0 dnr:0]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.153812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.153835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.153850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.153872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.532 [2024-07-26 09:04:06.153887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.153909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.153924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.153946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.532 [2024-07-26 09:04:06.153961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.153987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.532 [2024-07-26 09:04:06.154103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.532 [2024-07-26 09:04:06.154157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.532 [2024-07-26 09:04:06.154195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.532 [2024-07-26 09:04:06.154269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.532 [2024-07-26 09:04:06.154469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.532 [2024-07-26 09:04:06.154509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.532 [2024-07-26 09:04:06.154736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.532 [2024-07-26 09:04:06.154770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:50.532 [2024-07-26 09:04:06.154791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.154805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.155539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.155583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.155622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.155678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.155730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.155766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.155800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.155835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.155870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.155905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.155939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.155974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.155994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.156009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.156069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.156127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.156165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.156207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.156251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.156741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.156785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.156822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.156859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.156896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.156933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.156971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.156987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.157023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.157084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.157137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.157179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.157235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.157275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.157312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.157349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.157385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.157438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.157475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.533 [2024-07-26 09:04:06.157511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.533 [2024-07-26 09:04:06.157547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:50.533 [2024-07-26 09:04:06.157583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.157599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.157619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.157634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.157654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.157672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.157693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.157707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.157728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.157742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.157762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.157777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.157813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.157829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.157850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.157865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.157886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.157901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.159864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.159901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.159927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.159958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.159981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.159996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.160246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.160367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.160403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.160438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.160473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.160508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.160652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.160687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.534 [2024-07-26 09:04:06.160722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.534 [2024-07-26 09:04:06.160791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:50.534 [2024-07-26 09:04:06.160812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.160826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.160846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.160861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.160882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.160897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.163341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.163387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.163426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.163470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.163507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.163545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.163598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.163635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.163686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.163721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.163778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.163815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.163851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.163887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.163923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.163964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.163989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.164222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.164423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.164478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.164516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.164586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.164621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.535 [2024-07-26 09:04:06.164656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:50.535 [2024-07-26 09:04:06.164676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.535 [2024-07-26 09:04:06.164705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.164727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.164742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.164763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.164778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.164800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.164815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.164835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.164850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.164871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.164886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.164908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.164926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.166337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.166382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.166612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.166649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.166905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.166939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.166974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.166990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.167011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.167026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.167047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.167084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.167110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.167126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.167148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.167164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.167185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.167201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.167222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.167238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.167259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.167290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.167315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.167331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.167352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.167382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.167404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.167418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.168194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.168219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.168246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.536 [2024-07-26 09:04:06.168263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:50.536 [2024-07-26 09:04:06.168302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.536 [2024-07-26 09:04:06.168318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.168339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.168354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.168390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.168405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.168425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.168439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.168459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.168473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.168494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.168508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.168529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.168543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.170509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.170569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.170607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.170643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.170679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.170715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.170767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.170805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.170857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.170894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.170930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.170966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.170987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.171005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.171107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.171143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.171195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.171268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.171392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.537 [2024-07-26 09:04:06.171427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:50.537 [2024-07-26 09:04:06.171707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.537 [2024-07-26 09:04:06.171721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.171741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.171756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.171776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.171790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.171811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.171825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.171845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.171860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.171880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.171895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.171915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.171929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.171954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.171969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.171989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.172004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.172024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.172038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.172066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.172083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.172104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.172119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.174109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.174154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.174192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.174230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.174267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.174304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.174342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.174399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.174438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.174489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.174525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.174560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.174595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.174629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.174654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.174687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.175146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.175191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.175229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.175266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.175324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.175363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.175414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.538 [2024-07-26 09:04:06.175449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.175484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.538 [2024-07-26 09:04:06.175518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:50.538 [2024-07-26 09:04:06.175539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.175553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.175587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.175622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.175656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.175715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.175751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.175787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.175828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.175881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.175918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.175955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.175976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.175992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.176028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.176044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.176074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.176108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.176131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.176151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.176174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.176190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.176211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.176227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.176248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.176264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.176286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.176301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.176330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.176346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.177731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.177770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.177797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.177829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.177853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.177883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.177905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.177921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.177941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.177956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.177977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.177992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.178013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.178028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.178070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.539 [2024-07-26 09:04:06.178088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.178109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.178140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.178162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.178176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.178197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.178228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:50.539 [2024-07-26 09:04:06.178251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.539 [2024-07-26 09:04:06.178272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.178311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.178348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.178385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.178437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.178474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.178510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.178561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.178597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.178632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.178667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.178702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.178740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.178776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.178811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.178845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.178880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.178915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.178950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.178970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.178985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.179005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.179020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.180704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.180728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.180785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.180819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.180842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.180857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.180878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.180896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.180918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.180932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.180953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.180967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.180988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.181018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.181040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.181055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.181103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.540 [2024-07-26 09:04:06.181120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.181142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.181157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.181179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.181195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:50.540 [2024-07-26 09:04:06.181218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.540 [2024-07-26 09:04:06.181233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:50.540 Received shutdown signal, test time was about 32.484183 seconds 00:30:50.540 00:30:50.540 Latency(us) 00:30:50.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.540 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:50.540 Verification LBA range: start 0x0 length 0x4000 00:30:50.540 Nvme0n1 : 32.48 7827.29 30.58 0.00 0.00 16327.56 546.13 4026531.84 
00:30:50.540 =================================================================================================================== 00:30:50.540 Total : 7827.29 30.58 0.00 0.00 16327.56 546.13 4026531.84 00:30:50.540 09:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:50.798 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:30:50.798 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:50.798 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:30:50.798 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:50.798 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:30:50.798 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:50.798 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:30:50.798 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:50.798 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:50.798 rmmod nvme_tcp 00:30:50.798 rmmod nvme_fabrics 00:30:50.798 rmmod nvme_keyring 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@489 -- # '[' -n 1086613 ']' 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1086613 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1086613 ']' 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1086613 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1086613 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1086613' 00:30:51.057 killing process with pid 1086613 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1086613 00:30:51.057 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1086613 00:30:51.315 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:51.315 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:51.315 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:51.315 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:51.315 09:04:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:51.315 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.315 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.315 09:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.217 09:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:53.217 00:30:53.217 real 0m40.943s 00:30:53.217 user 2m3.405s 00:30:53.217 sys 0m10.711s 00:30:53.217 09:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:53.217 09:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:53.217 ************************************ 00:30:53.217 END TEST nvmf_host_multipath_status 00:30:53.217 ************************************ 00:30:53.217 09:04:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:53.217 09:04:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:53.217 09:04:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:53.217 09:04:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.217 ************************************ 00:30:53.217 START TEST nvmf_discovery_remove_ifc 00:30:53.217 ************************************ 00:30:53.217 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:53.475 * Looking for test storage... 
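The teardown traced above (`nvmftestfini`) follows a fixed order: delete the NVMe-oF subsystem over RPC, unload the kernel nvme-tcp initiator stack, kill the SPDK target process, and flush the address on the test interface. This hedged sketch reproduces that sequence; the `RPC_PY` path, the `NVMFAPP_PID` variable, and the `DRY_RUN` switch are illustrative assumptions, not part of the actual `nvmf/common.sh` helpers.

```shell
#!/usr/bin/env bash
# Hedged sketch of the nvmftestfini teardown seen in the trace above.
# RPC_PY, NVMFAPP_PID, and DRY_RUN are assumptions for illustration.
set -u

RPC_PY=${RPC_PY:-./scripts/rpc.py}   # assumed location of SPDK's rpc.py
NVMFAPP_PID=${NVMFAPP_PID:-}         # PID of the running nvmf_tgt, if known
DRY_RUN=${DRY_RUN:-1}                # 1: echo commands instead of running them

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

nvmftestfini_sketch() {
    # 1. Remove the subsystem so initiators observe a clean shutdown
    run "$RPC_PY" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # 2. Unload the kernel initiator modules (mirrors the rmmod lines in the log)
    run modprobe -v -r nvme-tcp
    run modprobe -v -r nvme-fabrics
    # 3. Kill the SPDK target process if one is still running
    if [ -n "$NVMFAPP_PID" ]; then
        run kill "$NVMFAPP_PID"
    fi
    # 4. Flush the IPv4 address configured on the test interface
    run ip -4 addr flush cvl_0_1
}

nvmftestfini_sketch
```

With `DRY_RUN=1` (the default) the function only prints the commands it would run, which makes the ordering easy to inspect without touching kernel modules or live processes.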
00:30:53.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:30:53.475 09:04:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.416 09:04:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:55.416 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:55.417 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:55.417 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:55.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:55.417 09:04:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:55.417 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.417 09:04:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:55.417 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:55.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:55.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:30:55.676 00:30:55.676 --- 10.0.0.2 ping statistics --- 00:30:55.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.676 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:55.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:30:55.676 00:30:55.676 --- 10.0.0.1 ping statistics --- 00:30:55.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.676 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1093059 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1093059 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1093059 ']' 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:55.676 09:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.676 [2024-07-26 09:04:13.947376] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:30:55.676 [2024-07-26 09:04:13.947446] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.676 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.676 [2024-07-26 09:04:13.983354] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:55.676 [2024-07-26 09:04:14.009655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.676 [2024-07-26 09:04:14.092953] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:55.676 [2024-07-26 09:04:14.093010] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.676 [2024-07-26 09:04:14.093038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.676 [2024-07-26 09:04:14.093049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.676 [2024-07-26 09:04:14.093065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
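The network plumbing that nvmf_tcp_init performed above (nvmf/common.sh@229-268) can be summarized as the following dry-run sketch: the target NIC cvl_0_0 is moved into a private namespace and the two sides ping each other over 10.0.0.0/24. The commands are printed rather than executed, since the real sequence needs root and this rig's E810 ports; interface names and addresses are taken from the log, not invented.

```shell
# Dry-run sketch of the netns setup traced above; prints the commands only.
ns=cvl_0_0_ns_spdk
cmds=(
    "ip netns add $ns"
    "ip link set cvl_0_0 netns $ns"                          # target side into the netns
    "ip addr add 10.0.0.1/24 dev cvl_0_1"                    # initiator IP on the host
    "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0"  # target IP inside the netns
    "ip link set cvl_0_1 up"
    "ip netns exec $ns ip link set cvl_0_0 up"
    "ip netns exec $ns ip link set lo up"
    "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"  # NVMe/TCP listener port
    "ping -c 1 10.0.0.2"                                     # host -> target reachability
    "ip netns exec $ns ping -c 1 10.0.0.1"                   # target -> host reachability
)
printf '%s\n' "${cmds[@]}"
```

Putting the target interface in its own namespace is what lets the later part of the test "remove" the interface (address deleted, link downed) without disturbing the host side.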
00:30:55.676 [2024-07-26 09:04:14.093108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.934 [2024-07-26 09:04:14.232996] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.934 [2024-07-26 09:04:14.241228] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:55.934 null0 00:30:55.934 [2024-07-26 09:04:14.273150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1093078 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1093078 /tmp/host.sock 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1093078 ']' 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:55.934 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:55.934 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.934 [2024-07-26 09:04:14.338394] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:55.934 [2024-07-26 09:04:14.338477] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093078 ] 00:30:55.934 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.934 [2024-07-26 09:04:14.370704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:56.192 [2024-07-26 09:04:14.401257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.192 [2024-07-26 09:04:14.498122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 
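The host-side setup the test drives here can be sketched as the RPC sequence below. `rpc_cmd` in the log is SPDK's autotest wrapper around `scripts/rpc.py`; this is a dry-run that prints the steps rather than running them, since they need a live nvmf_tgt listening on /tmp/host.sock. All flags are copied from the trace above.

```shell
# Dry-run sketch of the host app + discovery RPC sequence from the log.
sock=/tmp/host.sock
steps=(
    "nvmf_tgt -m 0x1 -r $sock --wait-for-rpc -L bdev_nvme &"   # host app, RPC-only until framework init
    "rpc.py -s $sock bdev_nvme_set_options -e 1"               # bdev_nvme options as set in the log
    "rpc.py -s $sock framework_start_init"                     # leave the --wait-for-rpc state
    "rpc.py -s $sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach"
)
printf '%s\n' "${steps[@]}"
```

Note the short `--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1` window: it is what makes the reconnect/failure messages appear so quickly once the interface is taken down later in the trace.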
00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.192 09:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:57.572 [2024-07-26 09:04:15.702232] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:57.572 [2024-07-26 09:04:15.702256] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:57.572 [2024-07-26 09:04:15.702278] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:57.573 [2024-07-26 09:04:15.789555] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:57.573 [2024-07-26 09:04:15.852274] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:57.573 [2024-07-26 09:04:15.852335] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:57.573 [2024-07-26 09:04:15.852371] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:57.573 [2024-07-26 09:04:15.852392] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:57.573 [2024-07-26 09:04:15.852415] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:57.573 [2024-07-26 09:04:15.859091] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x135f370 was disconnected and freed. delete nvme_qpair. 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 
00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:57.573 09:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:58.950 09:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:58.950 09:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:58.950 09:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:58.950 09:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.950 09:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:58.950 09:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:58.950 09:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:58.950 09:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.950 09:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:58.950 09:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:59.886 09:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:59.886 09:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.886 09:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.886 09:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:59.886 09:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:59.886 09:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:59.886 09:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:59.886 09:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.886 09:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:59.886 09:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:00.823 09:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:00.823 09:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:00.823 09:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.823 09:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:00.823 09:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:00.823 09:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:00.823 09:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
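The repeated get_bdev_list / `sleep 1` cycle in the trace is the test's wait loop: it polls `bdev_get_bdevs` once per second until the sorted bdev list matches the expected value (`nvme0n1` after attach, the empty string once the interface is removed). A stand-alone sketch of that pattern, with a hypothetical stub in place of the RPC pipeline:

```shell
# Stub standing in for:
#   rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
get_bdev_list() {
    printf 'nvme0n1'
}

# Poll once per second until the bdev list equals the expected string, as the
# log's "[[ nvme0n1 != '' ]] ... sleep 1" iterations do; bounded here so the
# sketch cannot spin forever if the list never converges.
wait_for_bdev() {
    local expected=$1 tries=0
    while [ "$(get_bdev_list)" != "$expected" ]; do
        tries=$((tries + 1))
        [ "$tries" -gt 10 ] && return 1
        sleep 1
    done
}

wait_for_bdev nvme0n1 && echo "bdev list matches"
```

The `sort | xargs` tail normalizes the RPC output to a single space-separated line, so multi-bdev lists compare deterministically.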
00:31:00.823 09:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.823 09:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:00.823 09:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:01.762 09:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:01.762 09:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.762 09:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:01.762 09:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.762 09:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:01.762 09:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:01.762 09:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:01.762 09:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.762 09:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:01.762 09:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:03.144 09:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:03.144 09:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:03.144 09:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:03.144 09:04:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.144 09:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:03.144 09:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:03.144 09:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:03.144 09:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.144 09:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:03.144 09:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:03.144 [2024-07-26 09:04:21.293496] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:03.144 [2024-07-26 09:04:21.293575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.144 [2024-07-26 09:04:21.293600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.144 [2024-07-26 09:04:21.293621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.144 [2024-07-26 09:04:21.293636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.144 [2024-07-26 09:04:21.293652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.144 [2024-07-26 09:04:21.293667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.144 [2024-07-26 09:04:21.293684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.144 [2024-07-26 09:04:21.293699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.144 [2024-07-26 09:04:21.293715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.144 [2024-07-26 09:04:21.293731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.144 [2024-07-26 09:04:21.293747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1325d70 is same with the state(5) to be set 00:31:03.144 [2024-07-26 09:04:21.303516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325d70 (9): Bad file descriptor 00:31:03.144 [2024-07-26 09:04:21.313563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:04.082 09:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:04.082 09:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.083 09:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:04.083 09:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.083 09:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.083 09:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:04.083 09:04:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:04.083 [2024-07-26 09:04:22.349100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:04.083 [2024-07-26 09:04:22.349166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1325d70 with addr=10.0.0.2, port=4420 00:31:04.083 [2024-07-26 09:04:22.349195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1325d70 is same with the state(5) to be set 00:31:04.083 [2024-07-26 09:04:22.349244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325d70 (9): Bad file descriptor 00:31:04.083 [2024-07-26 09:04:22.349725] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:04.083 [2024-07-26 09:04:22.349776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:04.083 [2024-07-26 09:04:22.349799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:04.083 [2024-07-26 09:04:22.349819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:04.083 [2024-07-26 09:04:22.349850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:04.083 [2024-07-26 09:04:22.349870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:04.083 09:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.083 09:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:04.083 09:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:05.017 [2024-07-26 09:04:23.352371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.017 [2024-07-26 09:04:23.352407] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.018 [2024-07-26 09:04:23.352425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.018 [2024-07-26 09:04:23.352441] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:05.018 [2024-07-26 09:04:23.352465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:05.018 [2024-07-26 09:04:23.352504] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:05.018 [2024-07-26 09:04:23.352551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.018 [2024-07-26 09:04:23.352575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.018 [2024-07-26 09:04:23.352598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.018 [2024-07-26 09:04:23.352614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.018 [2024-07-26 09:04:23.352630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.018 [2024-07-26 09:04:23.352646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.018 [2024-07-26 09:04:23.352662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.018 [2024-07-26 09:04:23.352677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.018 [2024-07-26 09:04:23.352693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.018 [2024-07-26 09:04:23.352716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.018 [2024-07-26 09:04:23.352732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:31:05.018 [2024-07-26 09:04:23.352905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325210 (9): Bad file descriptor 00:31:05.018 [2024-07-26 09:04:23.353931] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:05.018 [2024-07-26 09:04:23.353956] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.018 09:04:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:05.018 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.276 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:05.276 09:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:06.214 09:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:06.214 09:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.214 09:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:06.214 09:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.214 09:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:06.214 09:04:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:06.214 09:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:06.214 09:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.214 09:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:06.214 09:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:07.154 [2024-07-26 09:04:25.406908] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:07.154 [2024-07-26 09:04:25.406954] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:07.154 [2024-07-26 09:04:25.406980] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:07.154 [2024-07-26 09:04:25.538405] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:07.154 09:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:07.154 09:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:07.154 09:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:07.154 09:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.154 09:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.154 09:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:07.154 09:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:31:07.154 09:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.154 09:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:07.154 09:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:07.414 [2024-07-26 09:04:25.761986] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:07.414 [2024-07-26 09:04:25.762042] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:07.414 [2024-07-26 09:04:25.762089] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:07.414 [2024-07-26 09:04:25.762126] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:07.414 [2024-07-26 09:04:25.762138] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:07.414 [2024-07-26 09:04:25.765307] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1368900 was disconnected and freed. delete nvme_qpair. 
00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1093078 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1093078 ']' 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1093078 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1093078 
00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1093078' 00:31:08.355 killing process with pid 1093078 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1093078 00:31:08.355 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1093078 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:08.614 rmmod nvme_tcp 00:31:08.614 rmmod nvme_fabrics 00:31:08.614 rmmod nvme_keyring 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1093059 ']' 00:31:08.614 
09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1093059 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1093059 ']' 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1093059 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:08.614 09:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1093059 00:31:08.614 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:08.614 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:08.614 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1093059' 00:31:08.614 killing process with pid 1093059 00:31:08.614 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1093059 00:31:08.614 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1093059 00:31:08.874 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:08.874 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:08.874 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:08.874 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:08.874 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:31:08.874 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.874 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.875 09:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:11.408 00:31:11.408 real 0m17.618s 00:31:11.408 user 0m25.490s 00:31:11.408 sys 0m3.023s 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:11.408 ************************************ 00:31:11.408 END TEST nvmf_discovery_remove_ifc 00:31:11.408 ************************************ 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.408 ************************************ 00:31:11.408 START TEST nvmf_identify_kernel_target 00:31:11.408 ************************************ 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:11.408 * Looking for test storage... 
00:31:11.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.408 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:11.409 09:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:13.315 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.315 09:04:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:13.315 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.315 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.316 09:04:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:13.316 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:13.316 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:13.316 
09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.316 
09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:13.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:31:13.316 00:31:13.316 --- 10.0.0.2 ping statistics --- 00:31:13.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.316 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:13.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:31:13.316 00:31:13.316 --- 10.0.0.1 ping statistics --- 00:31:13.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.316 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:13.316 09:04:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:13.316 09:04:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:14.286 Waiting for block devices as requested 00:31:14.286 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:14.545 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:14.545 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:14.803 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:14.803 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:14.803 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:14.803 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:15.061 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:15.061 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:15.061 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:15.061 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:15.319 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:15.319 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:15.319 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:15.319 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:15.319 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:15.578 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:15.578 09:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:15.578 09:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:15.578 09:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:31:15.578 09:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:15.578 09:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:15.578 09:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:15.578 09:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:15.578 09:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:15.578 09:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:15.578 No valid GPT data, bailing 00:31:15.578 09:04:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:15.578 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:15.838 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:15.838 00:31:15.838 Discovery Log Number of Records 2, Generation counter 2 00:31:15.838 =====Discovery Log Entry 0====== 00:31:15.838 trtype: tcp 00:31:15.838 adrfam: ipv4 00:31:15.838 subtype: current discovery subsystem 00:31:15.838 treq: not specified, sq flow control disable supported 00:31:15.838 portid: 1 00:31:15.838 trsvcid: 4420 00:31:15.838 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:15.838 traddr: 10.0.0.1 00:31:15.838 eflags: none 00:31:15.838 sectype: none 00:31:15.838 =====Discovery Log Entry 1====== 00:31:15.838 trtype: tcp 00:31:15.838 adrfam: ipv4 00:31:15.838 subtype: nvme subsystem 00:31:15.838 treq: not specified, sq flow control disable supported 00:31:15.838 portid: 1 
00:31:15.838 trsvcid: 4420 00:31:15.838 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:15.838 traddr: 10.0.0.1 00:31:15.838 eflags: none 00:31:15.838 sectype: none 00:31:15.838 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:15.838 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:15.838 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.838 ===================================================== 00:31:15.838 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:15.838 ===================================================== 00:31:15.838 Controller Capabilities/Features 00:31:15.838 ================================ 00:31:15.838 Vendor ID: 0000 00:31:15.838 Subsystem Vendor ID: 0000 00:31:15.838 Serial Number: 41f4e458cf9fb2bc25fa 00:31:15.838 Model Number: Linux 00:31:15.838 Firmware Version: 6.7.0-68 00:31:15.838 Recommended Arb Burst: 0 00:31:15.838 IEEE OUI Identifier: 00 00 00 00:31:15.838 Multi-path I/O 00:31:15.838 May have multiple subsystem ports: No 00:31:15.838 May have multiple controllers: No 00:31:15.838 Associated with SR-IOV VF: No 00:31:15.838 Max Data Transfer Size: Unlimited 00:31:15.838 Max Number of Namespaces: 0 00:31:15.838 Max Number of I/O Queues: 1024 00:31:15.838 NVMe Specification Version (VS): 1.3 00:31:15.838 NVMe Specification Version (Identify): 1.3 00:31:15.838 Maximum Queue Entries: 1024 00:31:15.838 Contiguous Queues Required: No 00:31:15.838 Arbitration Mechanisms Supported 00:31:15.838 Weighted Round Robin: Not Supported 00:31:15.838 Vendor Specific: Not Supported 00:31:15.838 Reset Timeout: 7500 ms 00:31:15.838 Doorbell Stride: 4 bytes 00:31:15.838 NVM Subsystem Reset: Not Supported 00:31:15.838 Command Sets Supported 00:31:15.838 NVM Command Set: Supported 00:31:15.838 Boot Partition: Not Supported 
00:31:15.838 Memory Page Size Minimum: 4096 bytes 00:31:15.838 Memory Page Size Maximum: 4096 bytes 00:31:15.838 Persistent Memory Region: Not Supported 00:31:15.839 Optional Asynchronous Events Supported 00:31:15.839 Namespace Attribute Notices: Not Supported 00:31:15.839 Firmware Activation Notices: Not Supported 00:31:15.839 ANA Change Notices: Not Supported 00:31:15.839 PLE Aggregate Log Change Notices: Not Supported 00:31:15.839 LBA Status Info Alert Notices: Not Supported 00:31:15.839 EGE Aggregate Log Change Notices: Not Supported 00:31:15.839 Normal NVM Subsystem Shutdown event: Not Supported 00:31:15.839 Zone Descriptor Change Notices: Not Supported 00:31:15.839 Discovery Log Change Notices: Supported 00:31:15.839 Controller Attributes 00:31:15.839 128-bit Host Identifier: Not Supported 00:31:15.839 Non-Operational Permissive Mode: Not Supported 00:31:15.839 NVM Sets: Not Supported 00:31:15.839 Read Recovery Levels: Not Supported 00:31:15.839 Endurance Groups: Not Supported 00:31:15.839 Predictable Latency Mode: Not Supported 00:31:15.839 Traffic Based Keep ALive: Not Supported 00:31:15.839 Namespace Granularity: Not Supported 00:31:15.839 SQ Associations: Not Supported 00:31:15.839 UUID List: Not Supported 00:31:15.839 Multi-Domain Subsystem: Not Supported 00:31:15.839 Fixed Capacity Management: Not Supported 00:31:15.839 Variable Capacity Management: Not Supported 00:31:15.839 Delete Endurance Group: Not Supported 00:31:15.839 Delete NVM Set: Not Supported 00:31:15.839 Extended LBA Formats Supported: Not Supported 00:31:15.839 Flexible Data Placement Supported: Not Supported 00:31:15.839 00:31:15.839 Controller Memory Buffer Support 00:31:15.839 ================================ 00:31:15.839 Supported: No 00:31:15.839 00:31:15.839 Persistent Memory Region Support 00:31:15.839 ================================ 00:31:15.839 Supported: No 00:31:15.839 00:31:15.839 Admin Command Set Attributes 00:31:15.839 ============================ 00:31:15.839 Security 
Send/Receive: Not Supported 00:31:15.839 Format NVM: Not Supported 00:31:15.839 Firmware Activate/Download: Not Supported 00:31:15.839 Namespace Management: Not Supported 00:31:15.839 Device Self-Test: Not Supported 00:31:15.839 Directives: Not Supported 00:31:15.839 NVMe-MI: Not Supported 00:31:15.839 Virtualization Management: Not Supported 00:31:15.839 Doorbell Buffer Config: Not Supported 00:31:15.839 Get LBA Status Capability: Not Supported 00:31:15.839 Command & Feature Lockdown Capability: Not Supported 00:31:15.839 Abort Command Limit: 1 00:31:15.839 Async Event Request Limit: 1 00:31:15.839 Number of Firmware Slots: N/A 00:31:15.839 Firmware Slot 1 Read-Only: N/A 00:31:15.839 Firmware Activation Without Reset: N/A 00:31:15.839 Multiple Update Detection Support: N/A 00:31:15.839 Firmware Update Granularity: No Information Provided 00:31:15.839 Per-Namespace SMART Log: No 00:31:15.839 Asymmetric Namespace Access Log Page: Not Supported 00:31:15.839 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:15.839 Command Effects Log Page: Not Supported 00:31:15.839 Get Log Page Extended Data: Supported 00:31:15.839 Telemetry Log Pages: Not Supported 00:31:15.839 Persistent Event Log Pages: Not Supported 00:31:15.839 Supported Log Pages Log Page: May Support 00:31:15.839 Commands Supported & Effects Log Page: Not Supported 00:31:15.839 Feature Identifiers & Effects Log Page:May Support 00:31:15.839 NVMe-MI Commands & Effects Log Page: May Support 00:31:15.839 Data Area 4 for Telemetry Log: Not Supported 00:31:15.839 Error Log Page Entries Supported: 1 00:31:15.839 Keep Alive: Not Supported 00:31:15.839 00:31:15.839 NVM Command Set Attributes 00:31:15.839 ========================== 00:31:15.839 Submission Queue Entry Size 00:31:15.839 Max: 1 00:31:15.839 Min: 1 00:31:15.839 Completion Queue Entry Size 00:31:15.839 Max: 1 00:31:15.839 Min: 1 00:31:15.839 Number of Namespaces: 0 00:31:15.839 Compare Command: Not Supported 00:31:15.839 Write Uncorrectable Command: 
Not Supported 00:31:15.839 Dataset Management Command: Not Supported 00:31:15.839 Write Zeroes Command: Not Supported 00:31:15.839 Set Features Save Field: Not Supported 00:31:15.839 Reservations: Not Supported 00:31:15.839 Timestamp: Not Supported 00:31:15.839 Copy: Not Supported 00:31:15.839 Volatile Write Cache: Not Present 00:31:15.839 Atomic Write Unit (Normal): 1 00:31:15.839 Atomic Write Unit (PFail): 1 00:31:15.839 Atomic Compare & Write Unit: 1 00:31:15.839 Fused Compare & Write: Not Supported 00:31:15.839 Scatter-Gather List 00:31:15.839 SGL Command Set: Supported 00:31:15.839 SGL Keyed: Not Supported 00:31:15.839 SGL Bit Bucket Descriptor: Not Supported 00:31:15.839 SGL Metadata Pointer: Not Supported 00:31:15.839 Oversized SGL: Not Supported 00:31:15.839 SGL Metadata Address: Not Supported 00:31:15.839 SGL Offset: Supported 00:31:15.839 Transport SGL Data Block: Not Supported 00:31:15.839 Replay Protected Memory Block: Not Supported 00:31:15.839 00:31:15.839 Firmware Slot Information 00:31:15.839 ========================= 00:31:15.839 Active slot: 0 00:31:15.839 00:31:15.839 00:31:15.839 Error Log 00:31:15.839 ========= 00:31:15.839 00:31:15.839 Active Namespaces 00:31:15.839 ================= 00:31:15.839 Discovery Log Page 00:31:15.839 ================== 00:31:15.839 Generation Counter: 2 00:31:15.839 Number of Records: 2 00:31:15.839 Record Format: 0 00:31:15.839 00:31:15.839 Discovery Log Entry 0 00:31:15.839 ---------------------- 00:31:15.839 Transport Type: 3 (TCP) 00:31:15.839 Address Family: 1 (IPv4) 00:31:15.839 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:15.839 Entry Flags: 00:31:15.839 Duplicate Returned Information: 0 00:31:15.839 Explicit Persistent Connection Support for Discovery: 0 00:31:15.839 Transport Requirements: 00:31:15.839 Secure Channel: Not Specified 00:31:15.839 Port ID: 1 (0x0001) 00:31:15.839 Controller ID: 65535 (0xffff) 00:31:15.839 Admin Max SQ Size: 32 00:31:15.839 Transport Service Identifier: 4420 
00:31:15.839 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:15.839 Transport Address: 10.0.0.1 00:31:15.839 Discovery Log Entry 1 00:31:15.839 ---------------------- 00:31:15.839 Transport Type: 3 (TCP) 00:31:15.839 Address Family: 1 (IPv4) 00:31:15.839 Subsystem Type: 2 (NVM Subsystem) 00:31:15.839 Entry Flags: 00:31:15.839 Duplicate Returned Information: 0 00:31:15.839 Explicit Persistent Connection Support for Discovery: 0 00:31:15.839 Transport Requirements: 00:31:15.839 Secure Channel: Not Specified 00:31:15.839 Port ID: 1 (0x0001) 00:31:15.839 Controller ID: 65535 (0xffff) 00:31:15.839 Admin Max SQ Size: 32 00:31:15.839 Transport Service Identifier: 4420 00:31:15.839 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:15.839 Transport Address: 10.0.0.1 00:31:15.839 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:15.839 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.839 get_feature(0x01) failed 00:31:15.839 get_feature(0x02) failed 00:31:15.839 get_feature(0x04) failed 00:31:15.840 ===================================================== 00:31:15.840 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:15.840 ===================================================== 00:31:15.840 Controller Capabilities/Features 00:31:15.840 ================================ 00:31:15.840 Vendor ID: 0000 00:31:15.840 Subsystem Vendor ID: 0000 00:31:15.840 Serial Number: 534dce8b93a9fd00e5d0 00:31:15.840 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:15.840 Firmware Version: 6.7.0-68 00:31:15.840 Recommended Arb Burst: 6 00:31:15.840 IEEE OUI Identifier: 00 00 00 00:31:15.840 Multi-path I/O 00:31:15.840 May have multiple subsystem ports: Yes 00:31:15.840 May have multiple 
controllers: Yes 00:31:15.840 Associated with SR-IOV VF: No 00:31:15.840 Max Data Transfer Size: Unlimited 00:31:15.840 Max Number of Namespaces: 1024 00:31:15.840 Max Number of I/O Queues: 128 00:31:15.840 NVMe Specification Version (VS): 1.3 00:31:15.840 NVMe Specification Version (Identify): 1.3 00:31:15.840 Maximum Queue Entries: 1024 00:31:15.840 Contiguous Queues Required: No 00:31:15.840 Arbitration Mechanisms Supported 00:31:15.840 Weighted Round Robin: Not Supported 00:31:15.840 Vendor Specific: Not Supported 00:31:15.840 Reset Timeout: 7500 ms 00:31:15.840 Doorbell Stride: 4 bytes 00:31:15.840 NVM Subsystem Reset: Not Supported 00:31:15.840 Command Sets Supported 00:31:15.840 NVM Command Set: Supported 00:31:15.840 Boot Partition: Not Supported 00:31:15.840 Memory Page Size Minimum: 4096 bytes 00:31:15.840 Memory Page Size Maximum: 4096 bytes 00:31:15.840 Persistent Memory Region: Not Supported 00:31:15.840 Optional Asynchronous Events Supported 00:31:15.840 Namespace Attribute Notices: Supported 00:31:15.840 Firmware Activation Notices: Not Supported 00:31:15.840 ANA Change Notices: Supported 00:31:15.840 PLE Aggregate Log Change Notices: Not Supported 00:31:15.840 LBA Status Info Alert Notices: Not Supported 00:31:15.840 EGE Aggregate Log Change Notices: Not Supported 00:31:15.840 Normal NVM Subsystem Shutdown event: Not Supported 00:31:15.840 Zone Descriptor Change Notices: Not Supported 00:31:15.840 Discovery Log Change Notices: Not Supported 00:31:15.840 Controller Attributes 00:31:15.840 128-bit Host Identifier: Supported 00:31:15.840 Non-Operational Permissive Mode: Not Supported 00:31:15.840 NVM Sets: Not Supported 00:31:15.840 Read Recovery Levels: Not Supported 00:31:15.840 Endurance Groups: Not Supported 00:31:15.840 Predictable Latency Mode: Not Supported 00:31:15.840 Traffic Based Keep ALive: Supported 00:31:15.840 Namespace Granularity: Not Supported 00:31:15.840 SQ Associations: Not Supported 00:31:15.840 UUID List: Not Supported 
00:31:15.840 Multi-Domain Subsystem: Not Supported
00:31:15.840 Fixed Capacity Management: Not Supported
00:31:15.840 Variable Capacity Management: Not Supported
00:31:15.840 Delete Endurance Group: Not Supported
00:31:15.840 Delete NVM Set: Not Supported
00:31:15.840 Extended LBA Formats Supported: Not Supported
00:31:15.840 Flexible Data Placement Supported: Not Supported
00:31:15.840
00:31:15.840 Controller Memory Buffer Support
00:31:15.840 ================================
00:31:15.840 Supported: No
00:31:15.840
00:31:15.840 Persistent Memory Region Support
00:31:15.840 ================================
00:31:15.840 Supported: No
00:31:15.840
00:31:15.840 Admin Command Set Attributes
00:31:15.840 ============================
00:31:15.840 Security Send/Receive: Not Supported
00:31:15.840 Format NVM: Not Supported
00:31:15.840 Firmware Activate/Download: Not Supported
00:31:15.840 Namespace Management: Not Supported
00:31:15.840 Device Self-Test: Not Supported
00:31:15.840 Directives: Not Supported
00:31:15.840 NVMe-MI: Not Supported
00:31:15.840 Virtualization Management: Not Supported
00:31:15.840 Doorbell Buffer Config: Not Supported
00:31:15.840 Get LBA Status Capability: Not Supported
00:31:15.840 Command & Feature Lockdown Capability: Not Supported
00:31:15.840 Abort Command Limit: 4
00:31:15.840 Async Event Request Limit: 4
00:31:15.840 Number of Firmware Slots: N/A
00:31:15.840 Firmware Slot 1 Read-Only: N/A
00:31:15.840 Firmware Activation Without Reset: N/A
00:31:15.840 Multiple Update Detection Support: N/A
00:31:15.840 Firmware Update Granularity: No Information Provided
00:31:15.840 Per-Namespace SMART Log: Yes
00:31:15.840 Asymmetric Namespace Access Log Page: Supported
00:31:15.840 ANA Transition Time : 10 sec
00:31:15.840
00:31:15.840 Asymmetric Namespace Access Capabilities
00:31:15.840 ANA Optimized State : Supported
00:31:15.840 ANA Non-Optimized State : Supported
00:31:15.840 ANA Inaccessible State : Supported
00:31:15.840 ANA Persistent Loss State : Supported
00:31:15.840 ANA Change State : Supported
00:31:15.840 ANAGRPID is not changed : No
00:31:15.840 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:31:15.840
00:31:15.840 ANA Group Identifier Maximum : 128
00:31:15.840 Number of ANA Group Identifiers : 128
00:31:15.840 Max Number of Allowed Namespaces : 1024
00:31:15.840 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:31:15.840 Command Effects Log Page: Supported
00:31:15.840 Get Log Page Extended Data: Supported
00:31:15.840 Telemetry Log Pages: Not Supported
00:31:15.840 Persistent Event Log Pages: Not Supported
00:31:15.840 Supported Log Pages Log Page: May Support
00:31:15.840 Commands Supported & Effects Log Page: Not Supported
00:31:15.840 Feature Identifiers & Effects Log Page:May Support
00:31:15.840 NVMe-MI Commands & Effects Log Page: May Support
00:31:15.840 Data Area 4 for Telemetry Log: Not Supported
00:31:15.840 Error Log Page Entries Supported: 128
00:31:15.840 Keep Alive: Supported
00:31:15.840 Keep Alive Granularity: 1000 ms
00:31:15.840
00:31:15.840 NVM Command Set Attributes
00:31:15.840 ==========================
00:31:15.840 Submission Queue Entry Size
00:31:15.840 Max: 64
00:31:15.840 Min: 64
00:31:15.840 Completion Queue Entry Size
00:31:15.840 Max: 16
00:31:15.840 Min: 16
00:31:15.840 Number of Namespaces: 1024
00:31:15.840 Compare Command: Not Supported
00:31:15.840 Write Uncorrectable Command: Not Supported
00:31:15.840 Dataset Management Command: Supported
00:31:15.840 Write Zeroes Command: Supported
00:31:15.840 Set Features Save Field: Not Supported
00:31:15.840 Reservations: Not Supported
00:31:15.840 Timestamp: Not Supported
00:31:15.840 Copy: Not Supported
00:31:15.840 Volatile Write Cache: Present
00:31:15.840 Atomic Write Unit (Normal): 1
00:31:15.840 Atomic Write Unit (PFail): 1
00:31:15.840 Atomic Compare & Write Unit: 1
00:31:15.840 Fused Compare & Write: Not Supported
00:31:15.840 Scatter-Gather List
00:31:15.840 SGL Command Set: Supported
00:31:15.840 SGL Keyed: Not Supported
00:31:15.840 SGL Bit Bucket Descriptor: Not Supported
00:31:15.840 SGL Metadata Pointer: Not Supported
00:31:15.840 Oversized SGL: Not Supported
00:31:15.840 SGL Metadata Address: Not Supported
00:31:15.840 SGL Offset: Supported
00:31:15.840 Transport SGL Data Block: Not Supported
00:31:15.840 Replay Protected Memory Block: Not Supported
00:31:15.840
00:31:15.840 Firmware Slot Information
00:31:15.840 =========================
00:31:15.840 Active slot: 0
00:31:15.840
00:31:15.840 Asymmetric Namespace Access
00:31:15.840 ===========================
00:31:15.841 Change Count : 0
00:31:15.841 Number of ANA Group Descriptors : 1
00:31:15.841 ANA Group Descriptor : 0
00:31:15.841 ANA Group ID : 1
00:31:15.841 Number of NSID Values : 1
00:31:15.841 Change Count : 0
00:31:15.841 ANA State : 1
00:31:15.841 Namespace Identifier : 1
00:31:15.841
00:31:15.841 Commands Supported and Effects
00:31:15.841 ==============================
00:31:15.841 Admin Commands
00:31:15.841 --------------
00:31:15.841 Get Log Page (02h): Supported
00:31:15.841 Identify (06h): Supported
00:31:15.841 Abort (08h): Supported
00:31:15.841 Set Features (09h): Supported
00:31:15.841 Get Features (0Ah): Supported
00:31:15.841 Asynchronous Event Request (0Ch): Supported
00:31:15.841 Keep Alive (18h): Supported
00:31:15.841 I/O Commands
00:31:15.841 ------------
00:31:15.841 Flush (00h): Supported
00:31:15.841 Write (01h): Supported LBA-Change
00:31:15.841 Read (02h): Supported
00:31:15.841 Write Zeroes (08h): Supported LBA-Change
00:31:15.841 Dataset Management (09h): Supported
00:31:15.841
00:31:15.841 Error Log
00:31:15.841 =========
00:31:15.841 Entry: 0
00:31:15.841 Error Count: 0x3
00:31:15.841 Submission Queue Id: 0x0
00:31:15.841 Command Id: 0x5
00:31:15.841 Phase Bit: 0
00:31:15.841 Status Code: 0x2
00:31:15.841 Status Code Type: 0x0
00:31:15.841 Do Not Retry: 1
00:31:15.841 Error Location: 0x28
00:31:15.841 LBA: 0x0
00:31:15.841 Namespace: 0x0
00:31:15.841 Vendor Log Page: 0x0
00:31:15.841 -----------
00:31:15.841 Entry: 1
00:31:15.841 Error Count: 0x2
00:31:15.841 Submission Queue Id: 0x0
00:31:15.841 Command Id: 0x5
00:31:15.841 Phase Bit: 0
00:31:15.841 Status Code: 0x2
00:31:15.841 Status Code Type: 0x0
00:31:15.841 Do Not Retry: 1
00:31:15.841 Error Location: 0x28
00:31:15.841 LBA: 0x0
00:31:15.841 Namespace: 0x0
00:31:15.841 Vendor Log Page: 0x0
00:31:15.841 -----------
00:31:15.841 Entry: 2
00:31:15.841 Error Count: 0x1
00:31:15.841 Submission Queue Id: 0x0
00:31:15.841 Command Id: 0x4
00:31:15.841 Phase Bit: 0
00:31:15.841 Status Code: 0x2
00:31:15.841 Status Code Type: 0x0
00:31:15.841 Do Not Retry: 1
00:31:15.841 Error Location: 0x28
00:31:15.841 LBA: 0x0
00:31:15.841 Namespace: 0x0
00:31:15.841 Vendor Log Page: 0x0
00:31:15.841
00:31:15.841 Number of Queues
00:31:15.841 ================
00:31:15.841 Number of I/O Submission Queues: 128
00:31:15.841 Number of I/O Completion Queues: 128
00:31:15.841
00:31:15.841 ZNS Specific Controller Data
00:31:15.841 ============================
00:31:15.841 Zone Append Size Limit: 0
00:31:15.841
00:31:15.841
00:31:15.841 Active Namespaces
00:31:15.841 =================
00:31:15.841 get_feature(0x05) failed
00:31:15.841 Namespace ID:1
00:31:15.841 Command Set Identifier: NVM (00h)
00:31:15.841 Deallocate: Supported
00:31:15.841 Deallocated/Unwritten Error: Not Supported
00:31:15.841 Deallocated Read Value: Unknown
00:31:15.841 Deallocate in Write Zeroes: Not Supported
00:31:15.841 Deallocated Guard Field: 0xFFFF
00:31:15.841 Flush: Supported
00:31:15.841 Reservation: Not Supported
00:31:15.841 Namespace Sharing Capabilities: Multiple Controllers
00:31:15.841 Size (in LBAs): 1953525168 (931GiB)
00:31:15.841 Capacity (in LBAs): 1953525168 (931GiB)
00:31:15.841 Utilization (in LBAs): 1953525168 (931GiB)
00:31:15.841 UUID: 8a3e123f-8b1c-4de4-aa4d-8071b4a56a53
00:31:15.841 Thin Provisioning: Not Supported
00:31:15.841 Per-NS Atomic Units: Yes
00:31:15.841 Atomic Boundary Size (Normal): 0
00:31:15.841 Atomic Boundary Size (PFail): 0
00:31:15.841 Atomic Boundary Offset: 0
00:31:15.841 NGUID/EUI64 Never Reused: No
00:31:15.841 ANA group ID: 1
00:31:15.841 Namespace Write Protected: No
00:31:15.841 Number of LBA Formats: 1
00:31:15.841 Current LBA Format: LBA Format #00
00:31:15.841 LBA Format #00: Data Size: 512 Metadata Size: 0
00:31:15.841
00:31:15.841 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:31:15.841 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:15.841 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync
00:31:15.841 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:15.841 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e
00:31:15.841 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:15.841 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:31:15.841 rmmod nvme_tcp
00:31:16.102 rmmod nvme_fabrics
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:16.102 09:04:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:18.075 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:18.075 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:31:18.075 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:31:18.075 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0
00:31:18.076 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:31:18.076 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:31:18.076 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:31:18.076 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:31:18.076 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:31:18.076 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:31:18.076 09:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:31:19.467 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:31:19.468 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:31:19.468 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:31:19.468 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:31:19.468 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:31:19.468 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:31:19.468 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:31:19.468 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:31:19.468 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:31:19.468 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:31:19.468 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:31:19.468 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:31:19.468 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:31:19.468 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:31:19.468 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:31:19.468 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:31:20.405 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:31:20.405
00:31:20.405 real 0m9.450s
00:31:20.405 user 0m2.107s
00:31:20.405 sys 0m3.420s
00:31:20.405 09:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:20.405 09:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:31:20.405 ************************************
00:31:20.405 END TEST nvmf_identify_kernel_target
00:31:20.405 ************************************
00:31:20.405 09:04:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:31:20.405 09:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:31:20.405 09:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:20.405 09:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:20.405 ************************************
00:31:20.405 START TEST nvmf_auth_host
00:31:20.405 ************************************
00:31:20.405 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:31:20.405 * Looking for test storage... * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.665 09:04:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:20.665 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:20.666 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:20.666 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.666 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.666 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.666 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:20.666 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:20.666 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:20.666 09:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.567 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:22.568 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:22.568 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:22.568 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:22.568 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:22.568 09:04:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:22.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:31:22.568 00:31:22.568 --- 10.0.0.2 ping statistics --- 00:31:22.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.568 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:22.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:31:22.568 00:31:22.568 --- 10.0.0.1 ping statistics --- 00:31:22.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.568 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1100200 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1100200 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1100200 ']' 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:22.568 09:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a321b15c23f5333cce85ca58385cc7f3 00:31:22.826 09:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cVr 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a321b15c23f5333cce85ca58385cc7f3 0 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a321b15c23f5333cce85ca58385cc7f3 0 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a321b15c23f5333cce85ca58385cc7f3 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:22.826 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cVr 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cVr 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.cVr 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:23.084 09:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=611aaaaa62e192f02c3539f6215c04ca0864f132c35d6b1bfd4297a7e03bca43 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6Yx 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 611aaaaa62e192f02c3539f6215c04ca0864f132c35d6b1bfd4297a7e03bca43 3 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 611aaaaa62e192f02c3539f6215c04ca0864f132c35d6b1bfd4297a7e03bca43 3 00:31:23.084 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=611aaaaa62e192f02c3539f6215c04ca0864f132c35d6b1bfd4297a7e03bca43 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6Yx 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6Yx 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6Yx 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=78ccbb839bb12c76b08afa8cbe482474d36ffd7a23cc6f95 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zvm 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 78ccbb839bb12c76b08afa8cbe482474d36ffd7a23cc6f95 0 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 78ccbb839bb12c76b08afa8cbe482474d36ffd7a23cc6f95 0 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=78ccbb839bb12c76b08afa8cbe482474d36ffd7a23cc6f95 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zvm 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zvm 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.zvm 
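The `gen_dhchap_key` / `format_dhchap_key` steps traced above (an `xxd -p -c0` read from `/dev/urandom`, then an inline `python -` call) produce the `DHHC-1:<digest>:<base64>:` secrets that appear later in this log. A minimal sketch of that formatting, assuming the base64 payload is the ASCII hex secret followed by its little-endian CRC-32 (the secret representation used for NVMe DH-HMAC-CHAP); function names mirror the shell helpers, not an actual SPDK Python API:

```python
import base64
import binascii
import os


def format_dhchap_key(key_hex: str, digest_id: int) -> str:
    """Sketch of nvmf/common.sh format_dhchap_key.

    Assumption: payload = base64(ascii_hex_secret + crc32_le(ascii_hex_secret)).
    """
    data = key_hex.encode("ascii")
    # CRC-32 over the ASCII secret, appended as 4 little-endian bytes
    crc = binascii.crc32(data).to_bytes(4, "little")
    return f"DHHC-1:{digest_id:02d}:{base64.b64encode(data + crc).decode()}:"


def gen_dhchap_key(digest_id: int, key_len: int) -> str:
    """Sketch of gen_dhchap_key: random hex secret of key_len characters,
    equivalent to `xxd -p -c0 -l <key_len/2> /dev/urandom`."""
    key_hex = os.urandom(key_len // 2).hex()
    return format_dhchap_key(key_hex, digest_id)
```

In the log's `digests` map, id 0 (`null`) means the secret is used as-is, while 1/2/3 select the SHA-256/384/512 variants; the generated file is then written with mode 0600, as the `chmod` steps show.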
00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=23abc4c209a31ee8a9a27c103a5f5cf7971d85432cdd7379 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.QY6 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 23abc4c209a31ee8a9a27c103a5f5cf7971d85432cdd7379 2 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 23abc4c209a31ee8a9a27c103a5f5cf7971d85432cdd7379 2 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=23abc4c209a31ee8a9a27c103a5f5cf7971d85432cdd7379 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:23.085 09:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.QY6 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.QY6 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.QY6 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f1bf6b61cc3e365e36027e0c1d0e7e72 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Aum 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f1bf6b61cc3e365e36027e0c1d0e7e72 1 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f1bf6b61cc3e365e36027e0c1d0e7e72 1 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=f1bf6b61cc3e365e36027e0c1d0e7e72 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Aum 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Aum 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Aum 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bff5e8a35e56e1d63114b309c94613ec 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kKK 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bff5e8a35e56e1d63114b309c94613ec 1 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bff5e8a35e56e1d63114b309c94613ec 1 00:31:23.085 09:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bff5e8a35e56e1d63114b309c94613ec 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:23.085 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:23.343 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kKK 00:31:23.343 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kKK 00:31:23.343 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.kKK 00:31:23.343 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:23.343 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d84bca22292003b66ca9e8aad21e4bdd7f52304ed1af61d1 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.cYj 00:31:23.344 09:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d84bca22292003b66ca9e8aad21e4bdd7f52304ed1af61d1 2 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d84bca22292003b66ca9e8aad21e4bdd7f52304ed1af61d1 2 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d84bca22292003b66ca9e8aad21e4bdd7f52304ed1af61d1 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.cYj 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.cYj 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.cYj 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=26b6ce7ec9d009b0e90416b0831cea4a 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zrq 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 26b6ce7ec9d009b0e90416b0831cea4a 0 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 26b6ce7ec9d009b0e90416b0831cea4a 0 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=26b6ce7ec9d009b0e90416b0831cea4a 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zrq 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zrq 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.zrq 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=df5e728a2aefeec0966bb063b42f4503cf383023dacf1aa162fbd1d54c354a48 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hq5 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key df5e728a2aefeec0966bb063b42f4503cf383023dacf1aa162fbd1d54c354a48 3 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 df5e728a2aefeec0966bb063b42f4503cf383023dacf1aa162fbd1d54c354a48 3 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=df5e728a2aefeec0966bb063b42f4503cf383023dacf1aa162fbd1d54c354a48 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hq5 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hq5 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hq5 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1100200 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 1100200 ']' 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:23.344 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.603 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:23.603 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:23.603 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:23.603 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cVr 00:31:23.603 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.603 09:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6Yx ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Yx 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.zvm 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.QY6 ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QY6 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Aum 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.kKK ]] 00:31:23.603 09:04:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kKK 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.cYj 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zrq ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zrq 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.hq5 00:31:23.603 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.861 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:23.862 09:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:24.797 Waiting for block devices as requested 00:31:24.797 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:25.056 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:25.056 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:25.314 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:25.314 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:25.314 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:25.314 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:25.574 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:25.574 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:25.574 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:25.833 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:25.833 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:25.833 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:25.833 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:26.091 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:26.091 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:26.091 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:31:26.658 09:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:26.658 09:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:26.658 09:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:26.658 09:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:26.658 09:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:26.658 09:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:26.658 09:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:26.658 09:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:26.658 09:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:26.658 No valid GPT data, bailing 00:31:26.658 09:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:26.658 09:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:26.658 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:26.918 00:31:26.918 Discovery Log Number of Records 2, Generation counter 2 00:31:26.918 =====Discovery Log Entry 0====== 00:31:26.918 trtype: tcp 00:31:26.918 adrfam: ipv4 00:31:26.918 subtype: current discovery subsystem 00:31:26.918 treq: not specified, sq flow control disable supported 00:31:26.918 portid: 1 00:31:26.918 trsvcid: 4420 00:31:26.918 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:26.918 traddr: 10.0.0.1 00:31:26.918 eflags: none 00:31:26.918 sectype: none 00:31:26.918 =====Discovery Log Entry 1====== 00:31:26.918 trtype: tcp 00:31:26.918 adrfam: ipv4 00:31:26.918 subtype: nvme subsystem 00:31:26.918 treq: not specified, sq flow control 
disable supported 00:31:26.918 portid: 1 00:31:26.918 trsvcid: 4420 00:31:26.918 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:26.918 traddr: 10.0.0.1 00:31:26.918 eflags: none 00:31:26.918 sectype: none 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.918 09:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.918 nvme0n1 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:26.918 
09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:26.918 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.919 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.919 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:26.919 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:26.919 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.919 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:26.919 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.919 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.178 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.178 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.178 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:27.178 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.178 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.178 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.178 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.179 nvme0n1 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 
00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.179 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.438 nvme0n1 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.438 
09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:27.438 09:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.438 09:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.696 nvme0n1 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe2048 3 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.696 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.697 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.697 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.697 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.697 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:27.697 09:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.697 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.957 nvme0n1 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.957 
09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.957 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.215 nvme0n1 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.215 09:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.215 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.474 nvme0n1 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:28.474 09:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:28.474 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.475 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.734 nvme0n1 00:31:28.734 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.734 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.734 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.734 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.734 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.734 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.734 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.734 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.734 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.734 09:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:28.734 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.734 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.734 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:28.734 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.734 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.734 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.734 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:28.734 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.735 
09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.735 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.995 nvme0n1 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.995 09:04:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.995 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:29.256 nvme0n1 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.256 
09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.256 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.517 nvme0n1 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:29.517 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.518 09:04:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.518 09:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.778 nvme0n1 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.778 09:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.778 09:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.778 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.037 nvme0n1 00:31:30.037 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.037 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.037 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.037 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.037 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.037 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.297 09:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:30.297 09:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.297 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.556 nvme0n1 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.556 09:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:30.556 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.557 09:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:30.817 nvme0n1 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.817 
09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.817 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.387 nvme0n1 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.387 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.388 09:04:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.388 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.388 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.388 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.388 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.388 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.388 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.388 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:31.388 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.388 09:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.955 nvme0n1 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.955 09:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.955 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.956 09:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.956 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.522 nvme0n1 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.522 09:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:32.522 09:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.522 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.523 09:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.091 nvme0n1 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.091 09:04:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:33.091 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.092 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:33.662 nvme0n1 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.662 
09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.662 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:33.663 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.663 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:33.663 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:33.663 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:33.663 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:33.663 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.663 09:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.229 nvme0n1 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.230 09:04:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.230 09:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.165 nvme0n1 00:31:35.165 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.165 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.165 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.165 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.165 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.165 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.165 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.165 09:04:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.165 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.165 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.424 09:04:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.424 09:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.365 nvme0n1 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.365 09:04:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.365 09:04:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.365 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.366 09:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.306 nvme0n1 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.306 09:04:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.306 09:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
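The `host/auth.sh@58` records above show the test building the optional controller-key argument with a `:+` parameter expansion, so `--dhchap-ctrlr-key` is passed to `bdev_nvme_attach_controller` only when a controller key exists for that keyid (compare the key3 attach above, which carries `ckey3`, with the key4 attach later, which does not). A minimal sketch of that idiom, with a placeholder key value (the real `ckeys` array is populated elsewhere in auth.sh):

```shell
#!/usr/bin/env bash
# Indexed array as in auth.sh; values here are placeholders, not real keys.
ckeys[2]="DHHC-1:01:placeholder"
ckeys[4]=""   # keyid 4 has no controller key (ckey= in the trace)

build_ckey_args() {
    local keyid=$1
    # ${var:+words}: expands to the alternate words only when ckeys[keyid]
    # is set and non-empty, so the flag pair appears conditionally.
    local args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${args[@]}"
}
```

Called with keyid 2 this yields the two-word flag pair; with keyid 4 it yields nothing, matching the bidirectional vs. unidirectional attach commands in the trace.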
00:31:38.246 nvme0n1 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.246 
09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.246 09:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.182 nvme0n1 00:31:39.182 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.182 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.182 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.182 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.182 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.182 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.182 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.182 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.182 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.182 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.443 09:04:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.443 nvme0n1 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:39.443 09:04:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.443 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.444 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.444 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:39.444 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.444 09:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.703 nvme0n1 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
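The repeated `nvmf/common.sh@741`-`@755` records are the `get_main_ns_ip` helper resolving which address to dial: it maps the transport to the *name* of the environment variable holding the IP, then dereferences that name. A simplified reconstruction from the trace (assumption: the real helper also handles namespace setups not exercised here; the IP values below are stand-ins):

```shell
#!/usr/bin/env bash
# Stand-in values; in the test environment these come from the nvmf setup.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Bail out if the transport is unset or unknown.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # ${!ip} is indirect expansion: the value of the variable named by $ip.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}
```

For tcp this echoes `$NVMF_INITIATOR_IP`, which is why every `bdev_nvme_attach_controller` in the trace targets 10.0.0.1.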
00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.703 
09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.703 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.964 nvme0n1 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.964 09:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.964 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.965 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.965 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:39.965 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.965 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
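The `host/auth.sh@100`-`@103` markers reveal the overall sweep: nested loops over every digest, dhgroup, and keyid, with one set-key/connect/verify/detach cycle per combination, which is why the same record sequence repeats throughout this log. A reduced sketch of that structure (assumption: the arrays below list only the values visible in this excerpt, and the per-combination work is replaced by a counter):

```shell
#!/usr/bin/env bash
# Subsets of the values seen in this excerpt; the full test covers more.
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe8192)
keys=(key0 key1 key2 key3 key4)

runs=0
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Real test body: nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # followed by connect_authenticate "$digest" "$dhgroup" "$keyid".
            runs=$((runs + 1))
        done
    done
done
```

With these subsets the body runs 2 x 2 x 5 = 20 times; each run corresponds to one nvme0n1 connect/detach block in the log.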
00:31:40.225 nvme0n1 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:40.225 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.226 
09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.226 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.487 nvme0n1 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.487 09:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.487 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.748 nvme0n1 00:31:40.748 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.748 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.748 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.748 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.748 09:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.748 09:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:40.748 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.749 09:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.749 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.014 nvme0n1 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.014 09:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.014 09:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.014 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.285 nvme0n1 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.285 09:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.285 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:41.553 nvme0n1 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.553 
09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.553 09:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.814 nvme0n1 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.814 09:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.814 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.073 nvme0n1 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.073 09:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.073 09:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.073 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.331 nvme0n1 00:31:42.331 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.331 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.331 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.331 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.331 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.331 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.591 09:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.591 09:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.591 09:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.851 nvme0n1 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.851 09:05:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.851 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:43.110 nvme0n1 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.110 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.111 
09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.111 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.369 nvme0n1 00:31:43.369 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.369 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.369 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.369 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.369 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.369 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.628 09:05:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.628 09:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.197 nvme0n1 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.197 09:05:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.197 09:05:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.197 09:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.764 nvme0n1 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.764 09:05:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.764 09:05:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.764 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.328 nvme0n1 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.328 09:05:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.328 09:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:45.895 nvme0n1 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.895 
09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.895 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:45.896 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.896 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.463 nvme0n1 00:31:46.463 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.463 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.463 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.463 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.463 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.463 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.720 09:05:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.720 09:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.658 nvme0n1 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.658 09:05:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.658 09:05:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.658 09:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.594 nvme0n1 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.594 09:05:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.594 09:05:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.594 09:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.528 nvme0n1 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.528 09:05:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.528 09:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:50.905 nvme0n1 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:50.905 09:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.905 
09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.905 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.906 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.906 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.906 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.906 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.906 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.906 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.906 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.906 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:50.906 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.906 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.840 nvme0n1 00:31:51.840 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.840 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.840 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.840 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.840 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.840 09:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.840 09:05:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.840 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.841 nvme0n1 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:51.841 09:05:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.841 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.101 nvme0n1 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.101 
09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.101 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.360 nvme0n1 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.360 09:05:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.360 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
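The `DHHC-1:NN:...:` strings passed around above are DH-HMAC-CHAP secrets. As a hedged sketch (following the conventions used by nvme-cli and NVMe TP 8006, not anything this log states explicitly), the payload is base64 of the raw key followed by a little-endian CRC-32 of the key, and the middle field names the transformation hash (00 = none, 01/02/03 = SHA-256/384/512, implying 32/48/64-byte keys):

```python
# Sketch: build and validate DHHC-1 secrets of the shape seen in the log.
# Assumptions (labeled, not taken from the log): payload layout is
# base64(key || crc32_le(key)); hash_id semantics follow nvme-cli.
import base64
import struct
import zlib


def build_dhchap_secret(key: bytes, hash_id: int = 0) -> str:
    # Append CRC-32 of the key, little-endian, then base64-encode.
    payload = key + struct.pack("<I", zlib.crc32(key))
    return f"DHHC-1:{hash_id:02d}:{base64.b64encode(payload).decode()}:"


def parse_dhchap_secret(secret: str) -> bytes:
    # "DHHC-1:00:<base64>:" splits into magic, hash id, payload, "".
    magic, _hash_id, b64, _ = secret.split(":")
    if magic != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], struct.unpack("<I", raw[-4:])[0]
    if zlib.crc32(key) != crc:
        raise ValueError("DH-HMAC-CHAP secret failed CRC-32 check")
    return key
```

Round-tripping a 32-byte key through these helpers yields a secret with the same `DHHC-1:00:...:` shape as the `key=`/`ckey=` values echoed by `host/auth.sh`.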
00:31:52.620 nvme0n1 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.620 
09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.620 09:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.881 nvme0n1 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.881 09:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.881 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.142 nvme0n1 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.142 09:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.142 09:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.142 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.402 nvme0n1 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.403 09:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.403 09:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.403 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.663 nvme0n1 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.663 09:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.663 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.664 09:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:53.923 nvme0n1 00:31:53.923 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.923 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.923 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.923 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.923 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.923 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.923 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.924 
09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.924 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.184 nvme0n1 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.184 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b: 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]] 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.185 09:05:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.185 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.449 nvme0n1 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.449 09:05:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.449 09:05:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.449 09:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.731 nvme0n1 00:31:54.731 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.731 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.731 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.731 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.731 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.995 09:05:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.995 09:05:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.995 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.996 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.996 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.996 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.996 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.996 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.996 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.996 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.254 nvme0n1 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.254 09:05:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.254 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:55.511 nvme0n1 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.511 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.512 
09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.512 09:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.078 nvme0n1
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b:
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=:
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b:
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]]
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=:
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.078 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.647 nvme0n1
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==:
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==:
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==:
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]]
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==:
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:56.647 09:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.214 nvme0n1
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw:
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL:
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw:
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]]
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL:
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.214 09:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.782 nvme0n1
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==:
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC:
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==:
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]]
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC:
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:31:57.782 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:57.783 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:58.351 nvme0n1
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=:
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=:
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:58.351 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:58.352 09:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:58.922 nvme0n1
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:31:58.922 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b:
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=:
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTMyMWIxNWMyM2Y1MzMzY2NlODVjYTU4Mzg1Y2M3ZjPIwf4b:
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=: ]]
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExYWFhYWE2MmUxOTJmMDJjMzUzOWY2MjE1YzA0Y2EwODY0ZjEzMmMzNWQ2YjFiZmQ0Mjk3YTdlMDNiY2E0M8w8D9o=:
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:58.923 09:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:59.860 nvme0n1
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==:
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==:
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==:
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]]
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==:
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:59.860 09:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:01.238 nvme0n1
00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:01.238 09:05:19
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjFiZjZiNjFjYzNlMzY1ZTM2MDI3ZTBjMWQwZTdlNzJr9MSw: 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: ]] 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZmNWU4YTM1ZTU2ZTFkNjMxMTRiMzA5Yzk0NjEzZWOJn1fL: 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.238 09:05:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.238 09:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.806 nvme0n1 00:32:01.806 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.806 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.806 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.806 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.806 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.066 09:05:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0YmNhMjIyOTIwMDNiNjZjYTllOGFhZDIxZTRiZGQ3ZjUyMzA0ZWQxYWY2MWQxpD8UKg==: 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: ]] 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZiNmNlN2VjOWQwMDliMGU5MDQxNmIwODMxY2VhNGGRqHZC: 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.066 09:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:03.007 nvme0n1 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGY1ZTcyOGEyYWVmZWVjMDk2NmJiMDYzYjQyZjQ1MDNjZjM4MzAyM2RhY2YxYWExNjJmYmQxZDU0YzM1NGE0ONIBPhA=: 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.007 
09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.007 09:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.944 nvme0n1 00:32:03.944 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.944 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.944 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.944 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzhjY2JiODM5YmIxMmM3NmIwOGFmYThjYmU0ODI0NzRkMzZmZmQ3YTIzY2M2Zjk1shH7Bw==: 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: ]] 00:32:03.945 
09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNhYmM0YzIwOWEzMWVlOGE5YTI3YzEwM2E1ZjVjZjc5NzFkODU0MzJjZGQ3Mzc5bapaxA==: 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.945 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.203 request: 00:32:04.203 { 00:32:04.203 "name": "nvme0", 00:32:04.203 "trtype": "tcp", 00:32:04.203 "traddr": "10.0.0.1", 00:32:04.203 "adrfam": "ipv4", 00:32:04.203 "trsvcid": "4420", 00:32:04.203 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:04.203 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:04.203 "prchk_reftag": false, 00:32:04.203 "prchk_guard": false, 00:32:04.203 "hdgst": false, 00:32:04.203 "ddgst": false, 00:32:04.203 "method": "bdev_nvme_attach_controller", 00:32:04.203 "req_id": 1 00:32:04.203 } 00:32:04.203 Got JSON-RPC error response 00:32:04.203 response: 00:32:04.203 { 00:32:04.203 "code": -5, 00:32:04.203 "message": "Input/output error" 00:32:04.203 } 00:32:04.203 09:05:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.203 09:05:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:04.203 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.204 request: 00:32:04.204 { 00:32:04.204 "name": "nvme0", 00:32:04.204 "trtype": "tcp", 00:32:04.204 "traddr": "10.0.0.1", 00:32:04.204 "adrfam": "ipv4", 00:32:04.204 
"trsvcid": "4420", 00:32:04.204 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:04.204 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:04.204 "prchk_reftag": false, 00:32:04.204 "prchk_guard": false, 00:32:04.204 "hdgst": false, 00:32:04.204 "ddgst": false, 00:32:04.204 "dhchap_key": "key2", 00:32:04.204 "method": "bdev_nvme_attach_controller", 00:32:04.204 "req_id": 1 00:32:04.204 } 00:32:04.204 Got JSON-RPC error response 00:32:04.204 response: 00:32:04.204 { 00:32:04.204 "code": -5, 00:32:04.204 "message": "Input/output error" 00:32:04.204 } 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.204 
09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.204 09:05:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.204 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.462 request: 00:32:04.462 { 00:32:04.462 "name": "nvme0", 00:32:04.462 "trtype": "tcp", 00:32:04.462 "traddr": "10.0.0.1", 00:32:04.462 "adrfam": "ipv4", 00:32:04.462 "trsvcid": "4420", 00:32:04.462 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:04.462 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:04.462 "prchk_reftag": false, 00:32:04.462 "prchk_guard": false, 00:32:04.462 "hdgst": false, 00:32:04.462 "ddgst": false, 00:32:04.462 "dhchap_key": "key1", 00:32:04.462 "dhchap_ctrlr_key": "ckey2", 00:32:04.462 "method": "bdev_nvme_attach_controller", 00:32:04.462 "req_id": 1 00:32:04.462 } 00:32:04.462 Got JSON-RPC error response 00:32:04.462 response: 00:32:04.462 { 00:32:04.462 "code": -5, 00:32:04.462 "message": "Input/output error" 00:32:04.462 } 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:04.462 09:05:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:04.462 rmmod nvme_tcp 00:32:04.462 rmmod nvme_fabrics 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1100200 ']' 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1100200 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1100200 ']' 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1100200 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1100200 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1100200' 00:32:04.462 killing process with pid 1100200 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1100200 00:32:04.462 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1100200 00:32:04.722 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:04.722 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:04.722 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:04.722 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:04.722 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:04.722 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.722 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.722 09:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 
00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:06.625 09:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:06.625 09:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:08.001 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:08.001 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:08.001 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:08.001 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:08.001 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:08.001 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:08.001 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:08.001 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:08.001 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:08.001 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:08.001 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:08.001 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:08.001 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:08.001 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:08.001 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:08.001 0000:80:04.0 (8086 0e20): 
ioatdma -> vfio-pci 00:32:08.936 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:08.936 09:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.cVr /tmp/spdk.key-null.zvm /tmp/spdk.key-sha256.Aum /tmp/spdk.key-sha384.cYj /tmp/spdk.key-sha512.hq5 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:08.936 09:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:10.316 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:10.316 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:10.316 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:10.316 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:10.316 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:10.316 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:10.316 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:10.316 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:10.316 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:10.316 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:10.316 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:10.316 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:10.316 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:10.316 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:10.316 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:10.316 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:10.316 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:10.316 00:32:10.316 real 0m49.838s 00:32:10.316 user 0m47.723s 00:32:10.316 sys 0m5.864s 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:10.316 09:05:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.316 ************************************ 00:32:10.316 END TEST nvmf_auth_host 00:32:10.316 ************************************ 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.316 ************************************ 00:32:10.316 START TEST nvmf_digest 00:32:10.316 ************************************ 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:10.316 * Looking for test storage... 
00:32:10.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.316 09:05:28 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:10.316 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:32:10.317 09:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:12.221 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.221 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:12.221 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:12.221 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:12.221 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:12.221 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:12.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:12.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:12.222 09:05:30 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:12.222 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:12.222 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.222 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:12.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:32:12.480 00:32:12.480 --- 10.0.0.2 ping statistics --- 00:32:12.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.480 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:32:12.480 00:32:12.480 --- 10.0.0.1 ping statistics --- 00:32:12.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.480 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:12.480 ************************************ 00:32:12.480 START TEST nvmf_digest_clean 00:32:12.480 ************************************ 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1109714
00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1109714
00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1109714 ']'
00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:12.480 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:12.481 09:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:32:12.481 [2024-07-26 09:05:30.834307] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:12.481 [2024-07-26 09:05:30.834394] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:12.481 EAL: No free 2048 kB hugepages reported on node 1
00:32:12.481 [2024-07-26 09:05:30.873938] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:12.481 [2024-07-26 09:05:30.906011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:12.738 [2024-07-26 09:05:30.998352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:12.738 [2024-07-26 09:05:30.998420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:12.738 [2024-07-26 09:05:30.998435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:12.738 [2024-07-26 09:05:30.998448] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:12.738 [2024-07-26 09:05:30.998460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:12.738 [2024-07-26 09:05:30.998490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]]
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:12.738 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:32:12.738 null0
00:32:12.738 [2024-07-26 09:05:31.181968] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:12.996 [2024-07-26 09:05:31.206211] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1109733
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1109733 /var/tmp/bperf.sock
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1109733 ']'
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:12.996 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:32:12.996 [2024-07-26 09:05:31.248464] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:12.996 [2024-07-26 09:05:31.248536] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109733 ]
00:32:12.996 EAL: No free 2048 kB hugepages reported on node 1
00:32:12.996 [2024-07-26 09:05:31.279803] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:12.996 [2024-07-26 09:05:31.308592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:12.996 [2024-07-26 09:05:31.394773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:13.254 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:13.254 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:32:13.254 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:32:13.254 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:32:13.254 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:32:13.512 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:13.512 09:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:13.817 nvme0n1
00:32:13.817 09:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:32:13.817 09:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:13.817 Running I/O for 2 seconds...
00:32:16.349
00:32:16.349 Latency(us)
00:32:16.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:16.349 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:16.349 nvme0n1 : 2.01 17836.42 69.67 0.00 0.00 7167.53 3810.80 18738.44
00:32:16.349 ===================================================================================================================
00:32:16.349 Total : 17836.42 69.67 0.00 0.00 7167.53 3810.80 18738.44
00:32:16.349 0
00:32:16.349 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:32:16.350 | select(.opcode=="crc32c")
00:32:16.350 | "\(.module_name) \(.executed)"'
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1109733
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1109733 ']'
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1109733
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1109733
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1109733'
killing process with pid 1109733
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1109733
Received shutdown signal, test time was about 2.000000 seconds
00:32:16.350
00:32:16.350 Latency(us)
00:32:16.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:16.350 ===================================================================================================================
00:32:16.350 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1109733
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1110145
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1110145 /var/tmp/bperf.sock
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1110145 ']'
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:16.350 09:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:32:16.608 [2024-07-26 09:05:34.817340] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:16.608 [2024-07-26 09:05:34.817419] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110145 ]
00:32:16.608 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:16.608 Zero copy mechanism will not be used.
00:32:16.608 EAL: No free 2048 kB hugepages reported on node 1
00:32:16.608 [2024-07-26 09:05:34.848607] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:16.608 [2024-07-26 09:05:34.880562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:16.608 [2024-07-26 09:05:34.974149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:16.608 09:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:16.608 09:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:32:16.608 09:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:32:16.608 09:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:32:16.608 09:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:32:17.176 09:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:17.176 09:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:17.433 nvme0n1
00:32:17.433 09:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:32:17.433 09:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:17.691 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:17.691 Zero copy mechanism will not be used.
00:32:17.691 Running I/O for 2 seconds...
00:32:19.591
00:32:19.591 Latency(us)
00:32:19.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:19.591 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:19.591 nvme0n1 : 2.00 3601.34 450.17 0.00 0.00 4438.58 4199.16 12039.21
00:32:19.591 ===================================================================================================================
00:32:19.591 Total : 3601.34 450.17 0.00 0.00 4438.58 4199.16 12039.21
00:32:19.591 0
00:32:19.591 09:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:32:19.591 09:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:32:19.591 09:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:32:19.591 09:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:32:19.591 09:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:32:19.591 | select(.opcode=="crc32c")
00:32:19.591 | "\(.module_name) \(.executed)"'
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1110145
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1110145 ']'
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1110145
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1110145
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1110145'
killing process with pid 1110145
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1110145
Received shutdown signal, test time was about 2.000000 seconds
00:32:19.851
00:32:19.851 Latency(us)
00:32:19.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:19.851 ===================================================================================================================
00:32:19.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:19.851 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1110145
00:32:20.109 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1110549
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1110549 /var/tmp/bperf.sock
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1110549 ']'
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:20.110 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:32:20.110 [2024-07-26 09:05:38.493387] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:20.110 [2024-07-26 09:05:38.493480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110549 ]
00:32:20.110 EAL: No free 2048 kB hugepages reported on node 1
00:32:20.110 [2024-07-26 09:05:38.527989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:20.110 [2024-07-26 09:05:38.556704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:20.368 [2024-07-26 09:05:38.646348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:20.368 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:20.369 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:32:20.369 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:32:20.369 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:32:20.369 09:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:32:20.938 09:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:20.938 09:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:21.196 nvme0n1
00:32:21.196 09:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:32:21.196 09:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:21.454 Running I/O for 2 seconds...
00:32:23.358
00:32:23.358 Latency(us)
00:32:23.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:23.358 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:23.358 nvme0n1 : 2.01 19648.67 76.75 0.00 0.00 6499.08 2682.12 12913.02
00:32:23.358 ===================================================================================================================
00:32:23.358 Total : 19648.67 76.75 0.00 0.00 6499.08 2682.12 12913.02
00:32:23.358 0
00:32:23.358 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:32:23.358 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:32:23.358 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:32:23.358 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:32:23.358 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:32:23.358 | select(.opcode=="crc32c")
00:32:23.358 | "\(.module_name) \(.executed)"'
00:32:23.616 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:32:23.616 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:32:23.616 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:32:23.616 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:32:23.616 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1110549
00:32:23.616 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1110549 ']'
00:32:23.616 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1110549
00:32:23.616 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:32:23.616 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:23.616 09:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1110549
00:32:23.616 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:23.616 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:23.616 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1110549'
killing process with pid 1110549
00:32:23.616 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1110549
Received shutdown signal, test time was about 2.000000 seconds
00:32:23.616
00:32:23.616 Latency(us)
00:32:23.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:23.616 ===================================================================================================================
00:32:23.616 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:23.616 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1110549
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1111076
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1111076 /var/tmp/bperf.sock
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1111076 ']'
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:23.874 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:32:23.874 [2024-07-26 09:05:42.255598] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:23.874 [2024-07-26 09:05:42.255674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1111076 ]
00:32:23.874 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:23.874 Zero copy mechanism will not be used.
00:32:23.874 EAL: No free 2048 kB hugepages reported on node 1
00:32:23.874 [2024-07-26 09:05:42.286405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:23.874 [2024-07-26 09:05:42.315119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.132 [2024-07-26 09:05:42.403516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.132 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:24.132 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:24.132 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:24.132 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:24.132 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:24.390 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.390 09:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.956 nvme0n1 00:32:24.956 09:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:24.956 09:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:24.956 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:24.956 Zero copy mechanism will not be used. 00:32:24.956 Running I/O for 2 seconds... 
00:32:26.854 00:32:26.854 Latency(us) 00:32:26.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.854 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:26.854 nvme0n1 : 2.00 3388.93 423.62 0.00 0.00 4710.91 3276.80 9903.22 00:32:26.854 =================================================================================================================== 00:32:26.854 Total : 3388.93 423.62 0.00 0.00 4710.91 3276.80 9903.22 00:32:26.854 0 00:32:26.854 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:26.854 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:26.854 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:26.854 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:26.854 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:26.854 | select(.opcode=="crc32c") 00:32:26.854 | "\(.module_name) \(.executed)"' 00:32:27.112 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:27.112 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:27.112 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:27.112 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:27.112 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1111076 00:32:27.112 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1111076 ']' 
00:32:27.112 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1111076 00:32:27.112 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:27.112 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:27.112 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1111076 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1111076' 00:32:27.370 killing process with pid 1111076 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1111076 00:32:27.370 Received shutdown signal, test time was about 2.000000 seconds 00:32:27.370 00:32:27.370 Latency(us) 00:32:27.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.370 =================================================================================================================== 00:32:27.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1111076 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1109714 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1109714 ']' 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1109714 00:32:27.370 09:05:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1109714 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1109714' 00:32:27.370 killing process with pid 1109714 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1109714 00:32:27.370 09:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1109714 00:32:27.628 00:32:27.628 real 0m15.249s 00:32:27.628 user 0m29.576s 00:32:27.628 sys 0m4.465s 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:27.628 ************************************ 00:32:27.628 END TEST nvmf_digest_clean 00:32:27.628 ************************************ 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.628 
************************************ 00:32:27.628 START TEST nvmf_digest_error 00:32:27.628 ************************************ 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:27.628 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1111511 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1111511 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1111511 ']' 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:27.886 [2024-07-26 09:05:46.138936] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:27.886 [2024-07-26 09:05:46.139028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.886 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.886 [2024-07-26 09:05:46.176295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:27.886 [2024-07-26 09:05:46.202223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.886 [2024-07-26 09:05:46.285322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.886 [2024-07-26 09:05:46.285378] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.886 [2024-07-26 09:05:46.285405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.886 [2024-07-26 09:05:46.285416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.886 [2024-07-26 09:05:46.285425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:27.886 [2024-07-26 09:05:46.285451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:27.886 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:28.144 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.144 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:28.144 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.144 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:28.144 [2024-07-26 09:05:46.370016] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:28.144 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.144 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:28.144 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:28.144 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:32:28.145 null0 00:32:28.145 [2024-07-26 09:05:46.490224] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.145 [2024-07-26 09:05:46.514473] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1111591 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1111591 /var/tmp/bperf.sock 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1111591 ']' 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bperf.sock...' 00:32:28.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:28.145 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:28.145 [2024-07-26 09:05:46.562462] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:28.145 [2024-07-26 09:05:46.562544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1111591 ] 00:32:28.145 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.145 [2024-07-26 09:05:46.597554] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:28.402 [2024-07-26 09:05:46.628095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.402 [2024-07-26 09:05:46.718909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.402 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.402 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:28.402 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:28.402 09:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:28.660 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:28.660 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.660 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:28.660 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.660 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:28.660 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:29.225 nvme0n1 00:32:29.225 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:29.225 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.225 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:29.225 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.225 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:29.225 09:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:29.225 Running I/O for 2 seconds... 00:32:29.225 [2024-07-26 09:05:47.568323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.225 [2024-07-26 09:05:47.568405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.225 [2024-07-26 09:05:47.568425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.225 [2024-07-26 09:05:47.587283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.225 [2024-07-26 09:05:47.587317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.225 [2024-07-26 09:05:47.587334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.225 [2024-07-26 09:05:47.606507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x162d280) 00:32:29.225 [2024-07-26 09:05:47.606540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.225 [2024-07-26 09:05:47.606558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.225 [2024-07-26 09:05:47.628145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.225 [2024-07-26 09:05:47.628180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.225 [2024-07-26 09:05:47.628200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.225 [2024-07-26 09:05:47.641381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.225 [2024-07-26 09:05:47.641414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.225 [2024-07-26 09:05:47.641432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.225 [2024-07-26 09:05:47.659757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.225 [2024-07-26 09:05:47.659788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.225 [2024-07-26 09:05:47.659805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.225 [2024-07-26 09:05:47.682802] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.225 [2024-07-26 09:05:47.682836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.225 [2024-07-26 09:05:47.682854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.696892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.696924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.696941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.715544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.715583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.715600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.735284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.735318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.735335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.755036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.755092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.755125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.773717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.773748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.773765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.791151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.791181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.791198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.804541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.804573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.804591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.824304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.824351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.824369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.842116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.842149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.842167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.862225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.862377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.862425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.881259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.881291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.881308] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.894501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.894532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.894549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.913484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.913515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.913532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.485 [2024-07-26 09:05:47.930628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.485 [2024-07-26 09:05:47.930659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.485 [2024-07-26 09:05:47.930675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.745 [2024-07-26 09:05:47.951206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.745 [2024-07-26 09:05:47.951239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18906 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:29.745 [2024-07-26 09:05:47.951256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.745 [2024-07-26 09:05:47.969485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.745 [2024-07-26 09:05:47.969516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.745 [2024-07-26 09:05:47.969533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.745 [2024-07-26 09:05:47.987963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.745 [2024-07-26 09:05:47.987994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.745 [2024-07-26 09:05:47.988011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.745 [2024-07-26 09:05:48.005738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.745 [2024-07-26 09:05:48.005769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.745 [2024-07-26 09:05:48.005785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.745 [2024-07-26 09:05:48.023200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.745 [2024-07-26 09:05:48.023239] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.745 [2024-07-26 09:05:48.023257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.745 [2024-07-26 09:05:48.037815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.745 [2024-07-26 09:05:48.037847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.745 [2024-07-26 09:05:48.037864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.745 [2024-07-26 09:05:48.055282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.745 [2024-07-26 09:05:48.055313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.746 [2024-07-26 09:05:48.055330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.746 [2024-07-26 09:05:48.075337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.746 [2024-07-26 09:05:48.075384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.746 [2024-07-26 09:05:48.075402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.746 [2024-07-26 09:05:48.094758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.746 [2024-07-26 
09:05:48.094790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.746 [2024-07-26 09:05:48.095345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.746 [2024-07-26 09:05:48.112560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.746 [2024-07-26 09:05:48.112590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.746 [2024-07-26 09:05:48.112606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.746 [2024-07-26 09:05:48.126598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.746 [2024-07-26 09:05:48.126630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.746 [2024-07-26 09:05:48.126646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.746 [2024-07-26 09:05:48.146893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.746 [2024-07-26 09:05:48.146924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.746 [2024-07-26 09:05:48.146940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.746 [2024-07-26 09:05:48.166808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x162d280) 00:32:29.746 [2024-07-26 09:05:48.166841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.746 [2024-07-26 09:05:48.166867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.746 [2024-07-26 09:05:48.180187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.746 [2024-07-26 09:05:48.180220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.746 [2024-07-26 09:05:48.180238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.746 [2024-07-26 09:05:48.197686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:29.746 [2024-07-26 09:05:48.197717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.746 [2024-07-26 09:05:48.197733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.006 [2024-07-26 09:05:48.217657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.006 [2024-07-26 09:05:48.217689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.006 [2024-07-26 09:05:48.217706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.006 [2024-07-26 09:05:48.236918] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.006 [2024-07-26 09:05:48.236949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.006 [2024-07-26 09:05:48.236965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.006 [2024-07-26 09:05:48.258290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.006 [2024-07-26 09:05:48.258341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.006 [2024-07-26 09:05:48.258358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.006 [2024-07-26 09:05:48.271230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.006 [2024-07-26 09:05:48.271264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.006 [2024-07-26 09:05:48.271281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.006 [2024-07-26 09:05:48.289682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.006 [2024-07-26 09:05:48.289713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.006 [2024-07-26 09:05:48.289730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:32:30.006 [2024-07-26 09:05:48.310276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.006 [2024-07-26 09:05:48.310308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.006 [2024-07-26 09:05:48.310325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.006 [2024-07-26 09:05:48.330636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.006 [2024-07-26 09:05:48.330674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.006 [2024-07-26 09:05:48.330691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.006 [2024-07-26 09:05:48.349675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.006 [2024-07-26 09:05:48.349705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.006 [2024-07-26 09:05:48.349722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.006 [2024-07-26 09:05:48.362774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.006 [2024-07-26 09:05:48.362804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.006 [2024-07-26 09:05:48.362820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.006 [2024-07-26 09:05:48.381377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.006 [2024-07-26 09:05:48.381408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.006 [2024-07-26 09:05:48.381424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.007 [2024-07-26 09:05:48.400501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.007 [2024-07-26 09:05:48.400532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.007 [2024-07-26 09:05:48.400549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.007 [2024-07-26 09:05:48.420037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.007 [2024-07-26 09:05:48.420089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.007 [2024-07-26 09:05:48.420120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.007 [2024-07-26 09:05:48.438845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.007 [2024-07-26 09:05:48.438876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.007 [2024-07-26 
09:05:48.438892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.007 [2024-07-26 09:05:48.452978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.007 [2024-07-26 09:05:48.453010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.007 [2024-07-26 09:05:48.453026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.266 [2024-07-26 09:05:48.471387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.266 [2024-07-26 09:05:48.471418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.266 [2024-07-26 09:05:48.471435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.266 [2024-07-26 09:05:48.490427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.266 [2024-07-26 09:05:48.490458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.266 [2024-07-26 09:05:48.490474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.266 [2024-07-26 09:05:48.508840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.266 [2024-07-26 09:05:48.508871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18155 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.266 [2024-07-26 09:05:48.508887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.266 [2024-07-26 09:05:48.522679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.266 [2024-07-26 09:05:48.522710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.266 [2024-07-26 09:05:48.522726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.266 [2024-07-26 09:05:48.542056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.266 [2024-07-26 09:05:48.542109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.266 [2024-07-26 09:05:48.542126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.266 [2024-07-26 09:05:48.562744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.267 [2024-07-26 09:05:48.562775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.267 [2024-07-26 09:05:48.562791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.267 [2024-07-26 09:05:48.582207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.267 [2024-07-26 09:05:48.582240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.267 [2024-07-26 09:05:48.582257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.267 [2024-07-26 09:05:48.603210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.267 [2024-07-26 09:05:48.603242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.267 [2024-07-26 09:05:48.603259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.267 [2024-07-26 09:05:48.618312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.267 [2024-07-26 09:05:48.618363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.267 [2024-07-26 09:05:48.618379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.267 [2024-07-26 09:05:48.640144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.267 [2024-07-26 09:05:48.640176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.267 [2024-07-26 09:05:48.640198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.267 [2024-07-26 09:05:48.661289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 
00:32:30.267 [2024-07-26 09:05:48.661321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.267 [2024-07-26 09:05:48.661337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.267 [2024-07-26 09:05:48.681463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.267 [2024-07-26 09:05:48.681500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.267 [2024-07-26 09:05:48.681521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.267 [2024-07-26 09:05:48.700187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.267 [2024-07-26 09:05:48.700217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.267 [2024-07-26 09:05:48.700233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.267 [2024-07-26 09:05:48.718189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.267 [2024-07-26 09:05:48.718220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.267 [2024-07-26 09:05:48.718236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.733171] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.733203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.733220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.754345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.754392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.754410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.774121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.774155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.774172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.793966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.794004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.794540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.809231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.809271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.809290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.830950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.830988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.831008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.851638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.851676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.851696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.873486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.873524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.873544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.889612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.889648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.889669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.908820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.908858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.908879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.929181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.527 [2024-07-26 09:05:48.929215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.527 [2024-07-26 09:05:48.929232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.527 [2024-07-26 09:05:48.949633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.528 [2024-07-26 09:05:48.949670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.528 [2024-07-26 
09:05:48.949690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.528 [2024-07-26 09:05:48.970179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.528 [2024-07-26 09:05:48.970211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.528 [2024-07-26 09:05:48.970227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.787 [2024-07-26 09:05:48.988581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.787 [2024-07-26 09:05:48.988619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.788 [2024-07-26 09:05:48.988639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.788 [2024-07-26 09:05:49.004306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.788 [2024-07-26 09:05:49.004352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.788 [2024-07-26 09:05:49.004368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.788 [2024-07-26 09:05:49.023258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.788 [2024-07-26 09:05:49.023289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19197 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.788 [2024-07-26 09:05:49.023305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.788 [2024-07-26 09:05:49.042836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.788 [2024-07-26 09:05:49.042872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.788 [2024-07-26 09:05:49.042892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.788 [2024-07-26 09:05:49.064971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.788 [2024-07-26 09:05:49.065008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.788 [2024-07-26 09:05:49.065028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.788 [2024-07-26 09:05:49.078675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.788 [2024-07-26 09:05:49.078712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.788 [2024-07-26 09:05:49.078732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.788 [2024-07-26 09:05:49.096296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280) 00:32:30.788 [2024-07-26 09:05:49.096328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.788 [2024-07-26 09:05:49.096360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:30.788 [2024-07-26 09:05:49.118186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:30.788 [2024-07-26 09:05:49.118216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.788 [2024-07-26 09:05:49.118233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:30.788 [2024-07-26 09:05:49.137934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:30.788 [2024-07-26 09:05:49.137977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.788 [2024-07-26 09:05:49.137998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:30.788 [2024-07-26 09:05:49.159046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:30.788 [2024-07-26 09:05:49.159091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.788 [2024-07-26 09:05:49.159111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:30.788 [2024-07-26 09:05:49.180873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:30.788 [2024-07-26 09:05:49.180910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.788 [2024-07-26 09:05:49.180930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:30.788 [2024-07-26 09:05:49.201201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:30.788 [2024-07-26 09:05:49.201233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.788 [2024-07-26 09:05:49.201250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:30.788 [2024-07-26 09:05:49.221280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:30.788 [2024-07-26 09:05:49.221313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.788 [2024-07-26 09:05:49.221330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:30.788 [2024-07-26 09:05:49.236253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:30.788 [2024-07-26 09:05:49.236284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.788 [2024-07-26 09:05:49.236301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.050 [2024-07-26 09:05:49.257680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.050 [2024-07-26 09:05:49.257731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.050 [2024-07-26 09:05:49.257752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.277847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.277884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.277904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.297755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.297792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.297812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.318015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.318052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.318084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.338433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.338979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.339116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.357474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.357511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.357531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.372934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.372972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.372992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.394088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.394138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.394155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.414143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.414173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.414190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.434755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.434792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.434812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.449570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.449607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.449627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.470163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.470193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.470215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.491306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.491352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.491373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.051 [2024-07-26 09:05:49.505363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.051 [2024-07-26 09:05:49.505399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.051 [2024-07-26 09:05:49.505418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.339 [2024-07-26 09:05:49.526323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.339 [2024-07-26 09:05:49.526357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.339 [2024-07-26 09:05:49.526390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.339 [2024-07-26 09:05:49.548864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x162d280)
00:32:31.339 [2024-07-26 09:05:49.548902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12987 len:1 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.339 [2024-07-26 09:05:49.548922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.339
00:32:31.339 Latency(us)
00:32:31.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:31.339 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:31.339 nvme0n1 : 2.01 13679.65 53.44 0.00 0.00 9346.20 4514.70 27573.67
00:32:31.339 ===================================================================================================================
00:32:31.339 Total : 13679.65 53.44 0.00 0.00 9346.20 4514.70 27573.67
00:32:31.339 0
00:32:31.339 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:31.339 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:31.339 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:31.339 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:31.339 | .driver_specific
00:32:31.339 | .nvme_error
00:32:31.339 | .status_code
00:32:31.339 | .command_transient_transport_error'
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 107 > 0 ))
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1111591
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1111591 ']'
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1111591
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1111591
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1111591'
00:32:31.597 killing process with pid 1111591
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1111591
00:32:31.597 Received shutdown signal, test time was about 2.000000 seconds
00:32:31.597
00:32:31.597 Latency(us)
00:32:31.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:31.597 ===================================================================================================================
00:32:31.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:31.597 09:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1111591
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1112065
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1112065 /var/tmp/bperf.sock
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1112065 ']'
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:31.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:31.855 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:31.855 [2024-07-26 09:05:50.145236] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:31.855 [2024-07-26 09:05:50.145315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112065 ]
00:32:31.855 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:31.855 Zero copy mechanism will not be used.
00:32:31.855 EAL: No free 2048 kB hugepages reported on node 1
00:32:31.855 [2024-07-26 09:05:50.177157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:31.855 [2024-07-26 09:05:50.207109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:31.855 [2024-07-26 09:05:50.297461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:32.113 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:32.113 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:32:32.113 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:32.113 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:32.370 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:32.370 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:32.370 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:32.370 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:32.370 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:32.370 09:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:32.938 nvme0n1
00:32:32.938 09:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:32.938 09:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:32.938 09:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:32.938 09:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:32.938 09:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:32.938 09:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:32.938 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:32.938 Zero copy mechanism will not be used.
00:32:32.938 Running I/O for 2 seconds...
00:32:32.938 [2024-07-26 09:05:51.241122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.938 [2024-07-26 09:05:51.241172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.241199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.250363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.250417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.250437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.259622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.259658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.259678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.268605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.268640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.268670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.277590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.277624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.277643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.286980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.287015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.287034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.296052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.296110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.296128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.305043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.305087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.305108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.314152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.314183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.314200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.323150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.323180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.323196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.332168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.332197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.332230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.341301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.341331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.341348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.350326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.350378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.350398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.359307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.359337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.359354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.368448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.368482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.368501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.378044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.378087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.378107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:32.939 [2024-07-26 09:05:51.388973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:32.939 [2024-07-26 09:05:51.389009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.939 [2024-07-26 09:05:51.389028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.198 [2024-07-26 09:05:51.399849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.198 [2024-07-26 09:05:51.399886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.198 [2024-07-26 09:05:51.399905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.198 [2024-07-26 09:05:51.410249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.198 [2024-07-26 09:05:51.410279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.198 [2024-07-26 09:05:51.410296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.198 [2024-07-26 09:05:51.420864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.420899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.420919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.430253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.430299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.430316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.439564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.439599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.439618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.448621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.448655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.448674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.457766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.457800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.457819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.466868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.466902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.466921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.475922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.475956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.475975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.484965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.484999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.485019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.494084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.494132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.494150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.503326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.503374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.503394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.512342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.512372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.512413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.521435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.521468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.521486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.530500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.530534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.530553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.539527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.539559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.539578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.548515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.548549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.548568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.199 [2024-07-26 09:05:51.557566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.199 [2024-07-26 09:05:51.557600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.199 [2024-07-26 09:05:51.557619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.566776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.566810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.199 [2024-07-26 09:05:51.566829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.575882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.575916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.199 [2024-07-26 09:05:51.575935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.584929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.584963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.199 [2024-07-26 09:05:51.584982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.594120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.594149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.199 [2024-07-26 
09:05:51.594166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.603254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.603284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.199 [2024-07-26 09:05:51.603301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.612287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.612318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.199 [2024-07-26 09:05:51.612336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.621355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.621404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.199 [2024-07-26 09:05:51.621423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.630489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.630522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.199 [2024-07-26 09:05:51.630541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.639515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.639549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.199 [2024-07-26 09:05:51.639568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.648522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.648556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.199 [2024-07-26 09:05:51.648575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.199 [2024-07-26 09:05:51.657531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.199 [2024-07-26 09:05:51.657565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.200 [2024-07-26 09:05:51.657595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.458 [2024-07-26 09:05:51.666581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.458 [2024-07-26 09:05:51.666615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.458 [2024-07-26 09:05:51.666645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.458 [2024-07-26 09:05:51.675632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.675667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.675692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.684623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.684656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.684675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.693572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.693604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.693623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.702542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.702574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.702593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.711524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.711557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.711576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.720661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.720694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.720713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.729715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.729748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.729767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.738742] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.738776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.738795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.747745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.747784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.747804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.756891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.756926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.756945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.765940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.765974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.765993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.775034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.775075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.775096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.784073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.784105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.784139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.792988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.793021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.793039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.801947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.801981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.801999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.810947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.810981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.811000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.819941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.819973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.819992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.829038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.829081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.829101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.837929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.837962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 
09:05:51.837983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.846993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.847026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.847045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.856153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.856196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.856213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.865203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.865233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.865256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.874163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.874207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.874226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.883193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.883223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.883244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.892185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.892215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.892233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.901132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.901162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.901189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.459 [2024-07-26 09:05:51.910069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.459 [2024-07-26 09:05:51.910101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.459 [2024-07-26 09:05:51.910135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:51.919099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:51.919146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:51.919171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:51.928014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:51.928047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:51.928077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:51.936917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:51.936949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:51.936967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:51.945941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:51.945974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:51.945993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:51.955012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:51.955045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:51.955073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:51.964100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:51.964144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:51.964164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:51.973206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:51.973235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:51.973253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:51.982386] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:51.982433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:51.982452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:51.991448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:51.991481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:51.991499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:52.000571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:52.000604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:52.000629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:52.009850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:52.009884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:52.009904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:52.018896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:52.018929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:52.018949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:52.027903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:52.027936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:52.027954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:52.037437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:52.037472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:52.037491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.719 [2024-07-26 09:05:52.047163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:33.719 [2024-07-26 09:05:52.047194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.719 [2024-07-26 09:05:52.047214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.719 [2024-07-26 09:05:52.056301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.719 [2024-07-26 09:05:52.056346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.719 [2024-07-26 09:05:52.056374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.719 [2024-07-26 09:05:52.065441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.719 [2024-07-26 09:05:52.065475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.719 [2024-07-26 09:05:52.065494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.719 [2024-07-26 09:05:52.074545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.719 [2024-07-26 09:05:52.074577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.719 [2024-07-26 09:05:52.074595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.719 [2024-07-26 09:05:52.083573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.719 [2024-07-26 09:05:52.083606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.719 [2024-07-26 09:05:52.083625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.719 [2024-07-26 09:05:52.092555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.719 [2024-07-26 09:05:52.092587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.719 [2024-07-26 09:05:52.092606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.719 [2024-07-26 09:05:52.101573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.719 [2024-07-26 09:05:52.101607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.719 [2024-07-26 09:05:52.101625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.720 [2024-07-26 09:05:52.110645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.720 [2024-07-26 09:05:52.110679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.720 [2024-07-26 09:05:52.110698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.720 [2024-07-26 09:05:52.119695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.720 [2024-07-26 09:05:52.119729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.720 [2024-07-26 09:05:52.119747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.720 [2024-07-26 09:05:52.128735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.720 [2024-07-26 09:05:52.128768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.720 [2024-07-26 09:05:52.128787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.720 [2024-07-26 09:05:52.137690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.720 [2024-07-26 09:05:52.137732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.720 [2024-07-26 09:05:52.137751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.720 [2024-07-26 09:05:52.146661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.720 [2024-07-26 09:05:52.146693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.720 [2024-07-26 09:05:52.146713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.720 [2024-07-26 09:05:52.155751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.720 [2024-07-26 09:05:52.155785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.720 [2024-07-26 09:05:52.155803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.720 [2024-07-26 09:05:52.165488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.720 [2024-07-26 09:05:52.165524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.720 [2024-07-26 09:05:52.165554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.720 [2024-07-26 09:05:52.174693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.720 [2024-07-26 09:05:52.174727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.720 [2024-07-26 09:05:52.174746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.183722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.183755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.183780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.192851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.192885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.192908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.201692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.201726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.201752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.210702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.210746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.210765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.219759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.219793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.219818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.228904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.228937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.228957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.238051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.238113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.238130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.247151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.247196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.247213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.256290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.256335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.256363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.265651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.265686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.265705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.274779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.274813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.274832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.283981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.284015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.284044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.293033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.293073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.293117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.302118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.302149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.302166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.311143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.311173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.311190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.320049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.320090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.320109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.329102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.329131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.329152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.338386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.338436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.338456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.347458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.347493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.347512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.979 [2024-07-26 09:05:52.356536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.979 [2024-07-26 09:05:52.356570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.979 [2024-07-26 09:05:52.356589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.980 [2024-07-26 09:05:52.365538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.980 [2024-07-26 09:05:52.365571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.980 [2024-07-26 09:05:52.365590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.980 [2024-07-26 09:05:52.374631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.980 [2024-07-26 09:05:52.374665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.980 [2024-07-26 09:05:52.374684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.980 [2024-07-26 09:05:52.383762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.980 [2024-07-26 09:05:52.383795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.980 [2024-07-26 09:05:52.383816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.980 [2024-07-26 09:05:52.392891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.980 [2024-07-26 09:05:52.392924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.980 [2024-07-26 09:05:52.392942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.980 [2024-07-26 09:05:52.402103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.980 [2024-07-26 09:05:52.402147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.980 [2024-07-26 09:05:52.402164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:33.980 [2024-07-26 09:05:52.411406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.980 [2024-07-26 09:05:52.411452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.980 [2024-07-26 09:05:52.411472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.980 [2024-07-26 09:05:52.420517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.980 [2024-07-26 09:05:52.420550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.980 [2024-07-26 09:05:52.420576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.980 [2024-07-26 09:05:52.429561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:33.980 [2024-07-26 09:05:52.429594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.980 [2024-07-26 09:05:52.429613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.240 [2024-07-26 09:05:52.438696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.240 [2024-07-26 09:05:52.438730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.240 [2024-07-26 09:05:52.438752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.240 [2024-07-26 09:05:52.447920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.240 [2024-07-26 09:05:52.447954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.240 [2024-07-26 09:05:52.447983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:34.240 [2024-07-26 09:05:52.457135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.240 [2024-07-26 09:05:52.457165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.240 [2024-07-26 09:05:52.457184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:34.240 [2024-07-26 09:05:52.466217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.240 [2024-07-26 09:05:52.466247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.240 [2024-07-26 09:05:52.466267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.240 [2024-07-26 09:05:52.475278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.240 [2024-07-26 09:05:52.475308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.240 [2024-07-26 09:05:52.475325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.240 [2024-07-26 09:05:52.484286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.240 [2024-07-26 09:05:52.484316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.240 [2024-07-26 09:05:52.484335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:34.240 [2024-07-26 09:05:52.493292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.240 [2024-07-26 09:05:52.493322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.240 [2024-07-26 09:05:52.493361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:34.240 [2024-07-26 09:05:52.502340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.240 [2024-07-26 09:05:52.502384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.240 [2024-07-26 09:05:52.502403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.240 [2024-07-26 09:05:52.511423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.240 [2024-07-26 09:05:52.511450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.240 [2024-07-26 09:05:52.511468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.240 [2024-07-26 09:05:52.520688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.520722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.520742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.529787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.529826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.529845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.538888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.538921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.538940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.548117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.548147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.548168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.557224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.557254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.557282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.566408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.566451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.566471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.575487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.575519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.575538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.584494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.584527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.584547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.593451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.593485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.593503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.602415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.602449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.602467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.611415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.611448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.611467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.620478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.620511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.620531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.629523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.629556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.629575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.638546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.638580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.638599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.647577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.647610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.647629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.656588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.656620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.656640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.665556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.665590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.665609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.674595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.674628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.674647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.683591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.683631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.683650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.241 [2024-07-26 09:05:52.692581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.241 [2024-07-26 09:05:52.692614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.241 [2024-07-26 09:05:52.692633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.502 [2024-07-26 09:05:52.701625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.502 [2024-07-26 09:05:52.701660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.502 [2024-07-26 09:05:52.701679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:34.502 [2024-07-26 09:05:52.710570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.502 [2024-07-26 09:05:52.710603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.502 [2024-07-26 09:05:52.710622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:34.502 [2024-07-26 09:05:52.719564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.502 [2024-07-26 09:05:52.719597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.502 [2024-07-26 09:05:52.719616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.502 [2024-07-26 09:05:52.728546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.502 [2024-07-26 09:05:52.728579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.502 [2024-07-26 09:05:52.728598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.502 [2024-07-26 09:05:52.737553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.502 [2024-07-26 09:05:52.737585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.502 [2024-07-26 09:05:52.737604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:34.502 [2024-07-26 09:05:52.746542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.502 [2024-07-26 09:05:52.746576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.502 [2024-07-26 09:05:52.746595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:34.502 [2024-07-26 09:05:52.755800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.502 [2024-07-26 09:05:52.755833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.502 [2024-07-26 09:05:52.755852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.502 [2024-07-26 09:05:52.764893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.502 [2024-07-26 09:05:52.764927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.502 [2024-07-26 09:05:52.764946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.502 [2024-07-26 09:05:52.774218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.502 [2024-07-26 09:05:52.774251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.502 [2024-07-26 09:05:52.774268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:34.502 [2024-07-26 09:05:52.783583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390)
00:32:34.502 [2024-07-26 09:05:52.783617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.502 [2024-07-26 09:05:52.783636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.502 [2024-07-26 09:05:52.792660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.502 [2024-07-26 09:05:52.792695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.502 [2024-07-26 09:05:52.792713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.502 [2024-07-26 09:05:52.801922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.502 [2024-07-26 09:05:52.801955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.801974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.811023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.811056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.811084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.820083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.820130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 
09:05:52.820146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.829126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.829156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.829189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.838117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.838146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.838168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.847157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.847186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.847203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.856129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.856158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.856174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.865211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.865240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.865257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.874247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.874279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.874296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.883297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.883328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.883361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.892347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.892391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.892410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.901467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.901501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.901520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.910631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.910664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.910682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.919798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.919836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.919855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.929140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.929170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.929187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.938245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.938275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.938293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.947424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.947458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.947477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.503 [2024-07-26 09:05:52.956597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.503 [2024-07-26 09:05:52.956632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.503 [2024-07-26 09:05:52.956651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:52.965843] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:52.965879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:52.965899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:52.974868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:52.974902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:52.974921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:52.983954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:52.983988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:52.984007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:52.993069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:52.993103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:52.993137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:53.002415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:53.002450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:53.002470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:53.011418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:53.011453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:53.011472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:53.020458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:53.020492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:53.020511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:53.029619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:53.029654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:53.029673] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:53.038641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:53.038676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:53.038694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:53.048403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:53.048449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:53.048469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:53.057475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:53.057510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:53.057528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:53.066834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:53.066869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 
09:05:53.066889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:53.076076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:53.076116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.764 [2024-07-26 09:05:53.076135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.764 [2024-07-26 09:05:53.085851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.764 [2024-07-26 09:05:53.085885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.085904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.096393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.096441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.096461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.106729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.106765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.106784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.117081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.117128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.117145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.127041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.127084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.127104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.136808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.136844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.136863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.147163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.147194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.147211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.156624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.156659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.156678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.165682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.165716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.165735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.174715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.174748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.174767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.183725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.183759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.183778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.192780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.192814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.192832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.201845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.201879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.201898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.210990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.211024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.211042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.765 [2024-07-26 09:05:53.220033] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:34.765 [2024-07-26 09:05:53.220074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.765 [2024-07-26 09:05:53.220095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.025 [2024-07-26 09:05:53.229076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:35.025 [2024-07-26 09:05:53.229126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.025 [2024-07-26 09:05:53.229143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.025 [2024-07-26 09:05:53.237911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f5c390) 00:32:35.025 [2024-07-26 09:05:53.237945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.025 [2024-07-26 09:05:53.237975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.025 00:32:35.025 Latency(us) 00:32:35.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.025 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:35.025 nvme0n1 : 2.00 3387.75 423.47 0.00 0.00 4717.39 4320.52 11019.76 00:32:35.025 =================================================================================================================== 00:32:35.025 Total : 3387.75 423.47 0.00 0.00 4717.39 4320.52 11019.76 
00:32:35.025 0 00:32:35.025 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:35.025 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:35.025 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:35.025 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:35.025 | .driver_specific 00:32:35.025 | .nvme_error 00:32:35.025 | .status_code 00:32:35.025 | .command_transient_transport_error' 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 )) 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1112065 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1112065 ']' 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1112065 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1112065 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1112065' 00:32:35.284 killing process with pid 1112065 00:32:35.284 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1112065 00:32:35.285 Received shutdown signal, test time was about 2.000000 seconds 00:32:35.285 00:32:35.285 Latency(us) 00:32:35.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.285 =================================================================================================================== 00:32:35.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:35.285 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1112065 00:32:35.542 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:32:35.542 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1112475 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1112475 /var/tmp/bperf.sock 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1112475 ']' 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:35.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:35.543 09:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:35.543 [2024-07-26 09:05:53.808890] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:35.543 [2024-07-26 09:05:53.808980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112475 ] 00:32:35.543 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.543 [2024-07-26 09:05:53.841349] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
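The get_transient_errcount helper seen earlier in the trace reads the per-controller counter out of `bdev_get_iostat` with a jq filter chain. A minimal standalone sketch of that extraction, using a hand-made sample JSON (the field layout mirrors the filter shown in the trace; the sample values are invented, and `jq` is assumed to be installed):

```shell
#!/bin/sh
# Sample bdev_get_iostat-style JSON (values invented for illustration).
sample_json='{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 219
          }
        }
      }
    }
  ]
}'

# Same filter chain the digest test uses: drill into the first bdev's
# NVMe error counters and print the transient transport error count.
count=$(printf '%s' "$sample_json" |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')

echo "$count"

# The test then checks the counter is non-zero, i.e. (( count > 0 )).
[ "$count" -gt 0 ] && echo "digest errors were recorded"
```

The trace's `(( 219 > 0 ))` check at host/digest.sh@71 is exactly this pattern: a non-zero counter proves the injected CRC corruption actually surfaced as NVMe transient transport errors.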
00:32:35.543 [2024-07-26 09:05:53.868752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.543 [2024-07-26 09:05:53.954111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.800 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:35.800 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:35.800 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:35.800 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:36.058 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:36.058 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.058 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.058 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.058 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:36.058 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:36.316 nvme0n1 00:32:36.316 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:36.316 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.316 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.316 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.316 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:36.316 09:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:36.575 Running I/O for 2 seconds... 00:32:36.576 [2024-07-26 09:05:54.869089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ee5c8 00:32:36.576 [2024-07-26 09:05:54.870135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:54.870185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:54.882009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ef6a8 00:32:36.576 [2024-07-26 09:05:54.883030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:54.883078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:54.895177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190fbcf0 
00:32:36.576 [2024-07-26 09:05:54.896353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:54.896382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:54.907104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190eb760 00:32:36.576 [2024-07-26 09:05:54.908291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:54.908324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:54.921226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e5a90 00:32:36.576 [2024-07-26 09:05:54.922625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:54.922658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:54.934208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e9e10 00:32:36.576 [2024-07-26 09:05:54.935745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:54.935777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:54.946083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a3d940) with pdu=0x2000190e73e0 00:32:36.576 [2024-07-26 09:05:54.947571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:54.947605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:54.957871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190fe720 00:32:36.576 [2024-07-26 09:05:54.958883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:54.958917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:54.970571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e3498 00:32:36.576 [2024-07-26 09:05:54.971469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:54.971500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:54.985008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f0bc0 00:32:36.576 [2024-07-26 09:05:54.986858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:54.986891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:54.998168] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ec408 00:32:36.576 [2024-07-26 09:05:55.000266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:55.000298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:55.007120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f5378 00:32:36.576 [2024-07-26 09:05:55.007930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:55.007962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:55.019811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e0a68 00:32:36.576 [2024-07-26 09:05:55.020643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:55.020676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:36.576 [2024-07-26 09:05:55.032646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e0ea0 00:32:36.576 [2024-07-26 09:05:55.033343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.576 [2024-07-26 09:05:55.033376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
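Each injected error appears in the trace as a pair of records: a tcp.c `data_crc32_calc_done` "Data digest error" line followed by an nvme_qpair.c completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A quick way to sanity-check a captured log is to count both kinds and confirm they track each other (a sketch; the sample lines are abbreviated copies of the records above):

```shell
#!/bin/sh
# Two abbreviated record pairs of the kind the trace repeats.
log='tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ee5c8
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ef6a8
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0'

# grep -c counts matching lines, one per record.
digest_errs=$(printf '%s\n' "$log" | grep -c 'Data digest error')
transient_errs=$(printf '%s\n' "$log" | grep -c 'TRANSIENT TRANSPORT ERROR')

echo "digest errors:    $digest_errs"
echo "transient errors: $transient_errs"
```

On a clean run of this test the two counts should agree: every digest failure detected on the TCP qpair is surfaced to the host as one transient transport completion.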
00:32:36.837 [2024-07-26 09:05:55.047051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190edd58 00:32:36.837 [2024-07-26 09:05:55.048730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.048762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.058765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e49b0 00:32:36.837 [2024-07-26 09:05:55.059955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.059993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.071393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e99d8 00:32:36.837 [2024-07-26 09:05:55.072384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.072414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.083366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e1b48 00:32:36.837 [2024-07-26 09:05:55.085270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.085300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.094252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f5be8 00:32:36.837 [2024-07-26 09:05:55.095094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.095125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.107421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ef270 00:32:36.837 [2024-07-26 09:05:55.108421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.108460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.121469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ee190 00:32:36.837 [2024-07-26 09:05:55.122686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.122718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.134826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190fa3a0 00:32:36.837 [2024-07-26 09:05:55.135894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.135926] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.149295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f92c0 00:32:36.837 [2024-07-26 09:05:55.151338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.151387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.158524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e9168 00:32:36.837 [2024-07-26 09:05:55.159389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.159426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.170397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190fb8b8 00:32:36.837 [2024-07-26 09:05:55.171260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.171289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.183508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190fe2e8 00:32:36.837 [2024-07-26 09:05:55.184545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.184582] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.197497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ecc78 00:32:36.837 [2024-07-26 09:05:55.198698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.198727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.210406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f2510 00:32:36.837 [2024-07-26 09:05:55.211786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.211819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.224769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ec408 00:32:36.837 [2024-07-26 09:05:55.226797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.226834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.233641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e2c28 00:32:36.837 [2024-07-26 09:05:55.234480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.837 [2024-07-26 09:05:55.234517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.245487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e7c50 00:32:36.837 [2024-07-26 09:05:55.246322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.246369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.258666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e0630 00:32:36.837 [2024-07-26 09:05:55.259663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.259699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.272690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f6890 00:32:36.837 [2024-07-26 09:05:55.273873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.273905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:36.837 [2024-07-26 09:05:55.285768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190fe720 00:32:36.837 [2024-07-26 09:05:55.287136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:2678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.837 [2024-07-26 09:05:55.287164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:37.100 [2024-07-26 09:05:55.297710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f2948 00:32:37.100 [2024-07-26 09:05:55.299068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.100 [2024-07-26 09:05:55.299106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:37.100 [2024-07-26 09:05:55.310952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e99d8 00:32:37.100 [2024-07-26 09:05:55.312482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.100 [2024-07-26 09:05:55.312519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:37.100 [2024-07-26 09:05:55.324242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ddc00 00:32:37.100 [2024-07-26 09:05:55.325939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.100 [2024-07-26 09:05:55.325976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:37.100 [2024-07-26 09:05:55.337398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e6738 00:32:37.100 [2024-07-26 09:05:55.339270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.100 [2024-07-26 09:05:55.339304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:37.100 [2024-07-26 09:05:55.349270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ed0b0 00:32:37.100 [2024-07-26 09:05:55.350619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.100 [2024-07-26 09:05:55.350646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:37.100 [2024-07-26 09:05:55.360761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e1b48 00:32:37.100 [2024-07-26 09:05:55.362706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.100 [2024-07-26 09:05:55.362739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:37.100 [2024-07-26 09:05:55.372481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190eb328 00:32:37.100 [2024-07-26 09:05:55.373339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.100 [2024-07-26 09:05:55.373379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:37.100 [2024-07-26 09:05:55.385591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e6738 00:32:37.100 
[2024-07-26 09:05:55.386613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.100 [2024-07-26 09:05:55.386647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:37.100 [2024-07-26 09:05:55.397568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190fa7d8 00:32:37.100 [2024-07-26 09:05:55.398579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.398616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.411597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e5a90 00:32:37.101 [2024-07-26 09:05:55.412784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.412822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.424574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190dece0 00:32:37.101 [2024-07-26 09:05:55.425951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.425988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.436408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a3d940) with pdu=0x2000190f31b8 00:32:37.101 [2024-07-26 09:05:55.437778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.437809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.448235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f4298 00:32:37.101 [2024-07-26 09:05:55.449090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.449118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.460858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e2c28 00:32:37.101 [2024-07-26 09:05:55.461530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.461560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.474047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ed4e8 00:32:37.101 [2024-07-26 09:05:55.474934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.474964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.488473] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f6020 00:32:37.101 [2024-07-26 09:05:55.490330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.490361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.500229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e6b70 00:32:37.101 [2024-07-26 09:05:55.501578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.501611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.511681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190de038 00:32:37.101 [2024-07-26 09:05:55.513623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.513665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.522513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f9b30 00:32:37.101 [2024-07-26 09:05:55.523343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.523392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 
dnr:0 00:32:37.101 [2024-07-26 09:05:55.535753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7100 00:32:37.101 [2024-07-26 09:05:55.536756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.536792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:37.101 [2024-07-26 09:05:55.549056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190feb58 00:32:37.101 [2024-07-26 09:05:55.550257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.101 [2024-07-26 09:05:55.550288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:37.362 [2024-07-26 09:05:55.563109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190fda78 00:32:37.362 [2024-07-26 09:05:55.564520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.362 [2024-07-26 09:05:55.564548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:37.362 [2024-07-26 09:05:55.577174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e1710 00:32:37.362 [2024-07-26 09:05:55.579230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.362 [2024-07-26 09:05:55.579263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:37.362 [2024-07-26 09:05:55.586178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190fac10 00:32:37.362 [2024-07-26 09:05:55.586992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.362 [2024-07-26 09:05:55.587038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:37.362 [2024-07-26 09:05:55.597952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e0ea0 00:32:37.362 [2024-07-26 09:05:55.598775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.362 [2024-07-26 09:05:55.598807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:37.362 [2024-07-26 09:05:55.611158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f35f0 00:32:37.362 [2024-07-26 09:05:55.612137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.362 [2024-07-26 09:05:55.612169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:37.362 [2024-07-26 09:05:55.624313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190fb8b8 00:32:37.362 [2024-07-26 09:05:55.625536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.362 [2024-07-26 09:05:55.625570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:37.362 [2024-07-26 09:05:55.637727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f5be8 00:32:37.362 [2024-07-26 09:05:55.639114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.362 [2024-07-26 09:05:55.639146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:37.362 [2024-07-26 09:05:55.649630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190e3498 00:32:37.362 [2024-07-26 09:05:55.650522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.362 [2024-07-26 09:05:55.650551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.662419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190ea680 00:32:37.363 [2024-07-26 09:05:55.663053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.663093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.676152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.676386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:37.363 [2024-07-26 09:05:55.676419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.689833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.690073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.690129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.703419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.703643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.703673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.716385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.716583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.716609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.730238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.730502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:22930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.730534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.743700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.743885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.743919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.756192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.756402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.756429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.768715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.768923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.768954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.781255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.781575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.781617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.793875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.794190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.794220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.806244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.806481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.806508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.363 [2024-07-26 09:05:55.818698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.363 [2024-07-26 09:05:55.818906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.363 [2024-07-26 09:05:55.818934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.831504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 
00:32:37.624 [2024-07-26 09:05:55.831824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.831867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.843769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.843960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.843993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.856116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.856321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.856350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.868343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.868549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.868575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.880672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.880865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.880892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.893321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.893594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.893624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.906032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.906294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.906324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.918413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.918603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.918629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.930806] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.930998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.931024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.943313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.943611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.943639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.955793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.955992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.956018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.624 [2024-07-26 09:05:55.968163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.968424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.624 [2024-07-26 09:05:55.968455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:32:37.624 [2024-07-26 09:05:55.980705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.624 [2024-07-26 09:05:55.980896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.625 [2024-07-26 09:05:55.980922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.625 [2024-07-26 09:05:55.993164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.625 [2024-07-26 09:05:55.993379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.625 [2024-07-26 09:05:55.993406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.625 [2024-07-26 09:05:56.005604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.625 [2024-07-26 09:05:56.005816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.625 [2024-07-26 09:05:56.005844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.625 [2024-07-26 09:05:56.018017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.625 [2024-07-26 09:05:56.018316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.625 [2024-07-26 09:05:56.018361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.625 [2024-07-26 09:05:56.030591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.625 [2024-07-26 09:05:56.030877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.625 [2024-07-26 09:05:56.030905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.625 [2024-07-26 09:05:56.042957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.625 [2024-07-26 09:05:56.043212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.625 [2024-07-26 09:05:56.043241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.625 [2024-07-26 09:05:56.055272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.625 [2024-07-26 09:05:56.055575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.625 [2024-07-26 09:05:56.055603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.625 [2024-07-26 09:05:56.067801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.625 [2024-07-26 09:05:56.068066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.625 [2024-07-26 09:05:56.068095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.625 [2024-07-26 09:05:56.080494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.625 [2024-07-26 09:05:56.080763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.625 [2024-07-26 09:05:56.080806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.093329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.093618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.093651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.105837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.106136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.106165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.118303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.118594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:37.886 [2024-07-26 09:05:56.118622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.130751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.130983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.131009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.143431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.143619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.143645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.155975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.156226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.156261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.168603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.168791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:21490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.168824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.181025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.181323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.181367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.193480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.193670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.193695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.205874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.206080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.206108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.218320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.218579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.218606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.230798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.230988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.231014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.243323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.243638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.243666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.255881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.256112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.256140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.268289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 
00:32:37.886 [2024-07-26 09:05:56.268562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.268590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.280777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.281092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.281120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.293328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.293618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.293645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.305905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.886 [2024-07-26 09:05:56.306217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.886 [2024-07-26 09:05:56.306247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.886 [2024-07-26 09:05:56.318393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.887 [2024-07-26 09:05:56.318670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.887 [2024-07-26 09:05:56.318698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.887 [2024-07-26 09:05:56.330794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.887 [2024-07-26 09:05:56.330983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.887 [2024-07-26 09:05:56.331008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:37.887 [2024-07-26 09:05:56.343384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:37.887 [2024-07-26 09:05:56.343639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.887 [2024-07-26 09:05:56.343667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.356184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.356376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.356417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.368731] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.369019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.369072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.381629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.381943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.381972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.394220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.394458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.394487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.406701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.406917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.406945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:32:38.148 [2024-07-26 09:05:56.419164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.419360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.419402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.431636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.431825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.431852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.444153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.444399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.444429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.456711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.456899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.456925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.469084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.469298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.469327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.481494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.481681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.481706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.493884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.494119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.494160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.506526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.506730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.506758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.518939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.519260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.519289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.531572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.531759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.531785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.544171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.544495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.544523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.556576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.556764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:38.148 [2024-07-26 09:05:56.556789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.569143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.148 [2024-07-26 09:05:56.569336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.148 [2024-07-26 09:05:56.569379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.148 [2024-07-26 09:05:56.581629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.149 [2024-07-26 09:05:56.581816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.149 [2024-07-26 09:05:56.581843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.149 [2024-07-26 09:05:56.594086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.149 [2024-07-26 09:05:56.594370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.149 [2024-07-26 09:05:56.594398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.149 [2024-07-26 09:05:56.606877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.149 [2024-07-26 09:05:56.607176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3442 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.149 [2024-07-26 09:05:56.607205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.408 [2024-07-26 09:05:56.619701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.408 [2024-07-26 09:05:56.619894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.408 [2024-07-26 09:05:56.619920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.408 [2024-07-26 09:05:56.631931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.408 [2024-07-26 09:05:56.632164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.408 [2024-07-26 09:05:56.632192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.408 [2024-07-26 09:05:56.644601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.408 [2024-07-26 09:05:56.644895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.408 [2024-07-26 09:05:56.644924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.408 [2024-07-26 09:05:56.657260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.408 [2024-07-26 09:05:56.657564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.408 [2024-07-26 09:05:56.657592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.408 [2024-07-26 09:05:56.669631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.408 [2024-07-26 09:05:56.669822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.408 [2024-07-26 09:05:56.669848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.408 [2024-07-26 09:05:56.682089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.408 [2024-07-26 09:05:56.682281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.408 [2024-07-26 09:05:56.682307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.408 [2024-07-26 09:05:56.694578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.408 [2024-07-26 09:05:56.694768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.408 [2024-07-26 09:05:56.694798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.408 [2024-07-26 09:05:56.706794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 
00:32:38.408 [2024-07-26 09:05:56.706988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.408 [2024-07-26 09:05:56.707016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.408 [2024-07-26 09:05:56.719124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.408 [2024-07-26 09:05:56.719394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.408 [2024-07-26 09:05:56.719421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.408 [2024-07-26 09:05:56.731581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.409 [2024-07-26 09:05:56.731803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.409 [2024-07-26 09:05:56.731831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.409 [2024-07-26 09:05:56.744174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.409 [2024-07-26 09:05:56.744506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.409 [2024-07-26 09:05:56.744533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.409 [2024-07-26 09:05:56.757407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.409 [2024-07-26 09:05:56.757660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.409 [2024-07-26 09:05:56.757696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.409 [2024-07-26 09:05:56.771209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.409 [2024-07-26 09:05:56.771505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.409 [2024-07-26 09:05:56.771537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.409 [2024-07-26 09:05:56.785041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.409 [2024-07-26 09:05:56.785271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.409 [2024-07-26 09:05:56.785305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.409 [2024-07-26 09:05:56.798828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.409 [2024-07-26 09:05:56.799078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.409 [2024-07-26 09:05:56.799107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.409 [2024-07-26 09:05:56.813040] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.409 [2024-07-26 09:05:56.813276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.409 [2024-07-26 09:05:56.813304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.409 [2024-07-26 09:05:56.827427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.409 [2024-07-26 09:05:56.827647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.409 [2024-07-26 09:05:56.827680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.409 [2024-07-26 09:05:56.841689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.409 [2024-07-26 09:05:56.841997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.409 [2024-07-26 09:05:56.842024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:38.409 [2024-07-26 09:05:56.855755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3d940) with pdu=0x2000190f7da8 00:32:38.409 [2024-07-26 09:05:56.856084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:38.409 [2024-07-26 09:05:56.856112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 
dnr:0
00:32:38.668
00:32:38.668 Latency(us)
00:32:38.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:38.668 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:38.668 nvme0n1 : 2.01 20172.12 78.80 0.00 0.00 6330.15 2682.12 15534.46
00:32:38.668 ===================================================================================================================
00:32:38.668 Total : 20172.12 78.80 0.00 0.00 6330.15 2682.12 15534.46
00:32:38.668 0
00:32:38.668 09:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:38.668 09:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:38.668 09:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:38.668 09:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:38.668 | .driver_specific
00:32:38.668 | .nvme_error
00:32:38.668 | .status_code
00:32:38.668 | .command_transient_transport_error'
00:32:38.668 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
00:32:38.668 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1112475
00:32:38.668 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1112475 ']'
00:32:38.668 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1112475
00:32:38.668 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:38.927 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:38.927 09:05:57
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1112475
00:32:38.927 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:38.927 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:38.927 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1112475'
killing process with pid 1112475
09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1112475
Received shutdown signal, test time was about 2.000000 seconds
00:32:38.927
00:32:38.927 Latency(us)
00:32:38.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:38.927 ===================================================================================================================
00:32:38.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:38.927 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1112475
00:32:38.927 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:38.927 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:38.927 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:38.928 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:38.928 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:39.188 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1112875
00:32:39.188 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:39.188 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1112875 /var/tmp/bperf.sock 00:32:39.188 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1112875 ']' 00:32:39.188 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:39.188 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:39.188 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:39.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:39.188 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:39.188 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:39.188 [2024-07-26 09:05:57.432961] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:39.188 [2024-07-26 09:05:57.433042] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112875 ] 00:32:39.188 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:39.188 Zero copy mechanism will not be used. 00:32:39.188 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.188 [2024-07-26 09:05:57.471014] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:32:39.188 [2024-07-26 09:05:57.498752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.188 [2024-07-26 09:05:57.590256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.447 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:39.447 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:39.447 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:39.447 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:39.706 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:39.706 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.706 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:39.706 09:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.706 09:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:39.706 09:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:40.271 nvme0n1 00:32:40.271 09:05:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:40.271 09:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.271 09:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:40.271 09:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.271 09:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:40.271 09:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:40.271 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:40.271 Zero copy mechanism will not be used. 00:32:40.271 Running I/O for 2 seconds... 
00:32:40.271 [2024-07-26 09:05:58.591603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.271 [2024-07-26 09:05:58.591974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.271 [2024-07-26 09:05:58.592026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.271 [2024-07-26 09:05:58.604145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.271 [2024-07-26 09:05:58.604496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.272 [2024-07-26 09:05:58.604543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.272 [2024-07-26 09:05:58.616756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.272 [2024-07-26 09:05:58.617118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.272 [2024-07-26 09:05:58.617148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.272 [2024-07-26 09:05:58.628154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.272 [2024-07-26 09:05:58.628503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.272 [2024-07-26 09:05:58.628549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.272 [2024-07-26 09:05:58.639142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.272 [2024-07-26 09:05:58.639497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.272 [2024-07-26 09:05:58.639543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.272 [2024-07-26 09:05:58.650458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.272 [2024-07-26 09:05:58.650807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.272 [2024-07-26 09:05:58.650843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.272 [2024-07-26 09:05:58.662436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.272 [2024-07-26 09:05:58.662791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.272 [2024-07-26 09:05:58.662820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.272 [2024-07-26 09:05:58.673630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.272 [2024-07-26 09:05:58.673992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.272 [2024-07-26 09:05:58.674035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.272 [2024-07-26 09:05:58.684785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.272 [2024-07-26 09:05:58.685194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.272 [2024-07-26 09:05:58.685222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.272 [2024-07-26 09:05:58.695366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.272 [2024-07-26 09:05:58.695707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.272 [2024-07-26 09:05:58.695735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.272 [2024-07-26 09:05:58.706284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.272 [2024-07-26 09:05:58.706617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.272 [2024-07-26 09:05:58.706647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.272 [2024-07-26 09:05:58.716778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:40.272 [2024-07-26 09:05:58.717146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.399501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.054 [2024-07-26 09:05:59.408664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.054 [2024-07-26 09:05:59.409026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.409054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.054 [2024-07-26 09:05:59.418769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.054 [2024-07-26 09:05:59.419160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.419190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.054 [2024-07-26 09:05:59.428872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.054 [2024-07-26 09:05:59.429246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.429276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.054 [2024-07-26 09:05:59.439072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.054 [2024-07-26 09:05:59.439439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.439468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.054 [2024-07-26 09:05:59.449203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.054 [2024-07-26 09:05:59.449530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.449563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.054 [2024-07-26 09:05:59.459273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.054 [2024-07-26 09:05:59.459565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.459595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.054 [2024-07-26 09:05:59.469605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.054 [2024-07-26 09:05:59.469967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.469996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.054 [2024-07-26 09:05:59.479666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.054 [2024-07-26 09:05:59.480003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.480037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.054 [2024-07-26 09:05:59.489387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.054 [2024-07-26 09:05:59.489630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.489659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.054 [2024-07-26 09:05:59.499382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.054 [2024-07-26 09:05:59.499718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.054 [2024-07-26 09:05:59.499747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.055 [2024-07-26 09:05:59.509502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.055 [2024-07-26 09:05:59.509845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.055 [2024-07-26 09:05:59.509874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 09:05:59.519491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 
00:32:41.315 [2024-07-26 09:05:59.519865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.519894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 09:05:59.529571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.315 [2024-07-26 09:05:59.529978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.530023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 09:05:59.539353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.315 [2024-07-26 09:05:59.539795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.539823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 09:05:59.549309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.315 [2024-07-26 09:05:59.549649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.549678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 09:05:59.559507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.315 [2024-07-26 09:05:59.559908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.559937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 09:05:59.569765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.315 [2024-07-26 09:05:59.570225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.570255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 09:05:59.579353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.315 [2024-07-26 09:05:59.579742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.579775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 09:05:59.589961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.315 [2024-07-26 09:05:59.590337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.590376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 
09:05:59.600086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.315 [2024-07-26 09:05:59.600457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.600486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 09:05:59.610204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.315 [2024-07-26 09:05:59.610553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.610583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.315 [2024-07-26 09:05:59.620290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.315 [2024-07-26 09:05:59.620669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.315 [2024-07-26 09:05:59.620698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.629253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.629605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.629634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.639583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.639884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.639917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.649367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.649768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.649797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.658279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.658582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.658615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.668774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.669132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.669161] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.678840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.679220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.679253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.688850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.689117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.689146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.697627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.697996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.698025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.708364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.708700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 
09:05:59.708729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.718606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.718970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.719003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.728990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.729392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.729421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.739294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.739661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.739694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.749786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.750159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.750188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.759954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.760269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.760298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.316 [2024-07-26 09:05:59.769649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.316 [2024-07-26 09:05:59.769970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.316 [2024-07-26 09:05:59.770014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.779351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.779770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.779799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.789187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.789490] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.789520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.799157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.799543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.799587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.809343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.809659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.809688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.818473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.818812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.818841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.828630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 
09:05:59.828972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.829001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.838616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.838965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.839009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.847701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.848123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.848162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.857711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.858121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.858151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.868068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.868412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.868462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.878433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.878759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.878789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.888547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.888906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.888936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.898350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.898557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.898602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.907746] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.908025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.908054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.917208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.917535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.917564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.926569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.926936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.926969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.936640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.936969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.936998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.946201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.946503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.946532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.955112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.955496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.955525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.965504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.965877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.965906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.975641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.975940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.975969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.985155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.985568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.985598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:05:59.995115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:05:59.995447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:05:59.995480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:06:00.005201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.577 [2024-07-26 09:06:00.005521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.577 [2024-07-26 09:06:00.005556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.577 [2024-07-26 09:06:00.014987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.578 [2024-07-26 09:06:00.015300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.578 [2024-07-26 09:06:00.015330] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.578 [2024-07-26 09:06:00.024040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.578 [2024-07-26 09:06:00.024454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.578 [2024-07-26 09:06:00.024492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.578 [2024-07-26 09:06:00.033575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.578 [2024-07-26 09:06:00.033869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.578 [2024-07-26 09:06:00.033915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.041878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.042298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.042342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.050958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.051332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.051368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.060322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.060626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.060656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.069078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.069465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.069508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.078722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.079082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.079123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.087512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.087820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.087863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.096211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.096515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.096559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.105695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.106064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.106094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.115348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.115725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.115755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.125104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.125393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.125423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.134283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.134665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.134695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.143170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.143555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.143599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.153633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.154009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.154038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.163462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 
00:32:41.838 [2024-07-26 09:06:00.163732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.163761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.172536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.172842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.172871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.181781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.182180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.182210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.191648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.192040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.192082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.201358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.201730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.201760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.211295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.211677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.211710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.221014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.221326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.221367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.229479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.229803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.229832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 
09:06:00.239366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.239703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.239733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.249227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.249617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.249650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.259037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.259391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.838 [2024-07-26 09:06:00.259429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.838 [2024-07-26 09:06:00.268282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.838 [2024-07-26 09:06:00.268648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.839 [2024-07-26 09:06:00.268678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.839 [2024-07-26 09:06:00.277121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.839 [2024-07-26 09:06:00.277387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.839 [2024-07-26 09:06:00.277416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.839 [2024-07-26 09:06:00.286944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.839 [2024-07-26 09:06:00.287339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.839 [2024-07-26 09:06:00.287368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.839 [2024-07-26 09:06:00.295767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:41.839 [2024-07-26 09:06:00.296100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.839 [2024-07-26 09:06:00.296136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.097 [2024-07-26 09:06:00.305205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.097 [2024-07-26 09:06:00.305505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.097 [2024-07-26 09:06:00.305534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.097 [2024-07-26 09:06:00.313122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.097 [2024-07-26 09:06:00.313422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.097 [2024-07-26 09:06:00.313464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.097 [2024-07-26 09:06:00.321960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.097 [2024-07-26 09:06:00.322262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.097 [2024-07-26 09:06:00.322291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.097 [2024-07-26 09:06:00.331332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.097 [2024-07-26 09:06:00.331614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.097 [2024-07-26 09:06:00.331642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.097 [2024-07-26 09:06:00.339745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.097 [2024-07-26 09:06:00.340021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.097 [2024-07-26 09:06:00.340049] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.097 [2024-07-26 09:06:00.349481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.097 [2024-07-26 09:06:00.349892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.349921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.359545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.359924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.359953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.369386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.369722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.369751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.378363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.378687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.378717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.388215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.388541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.388570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.397577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.397898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.397927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.405992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.406340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.406377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.415832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.416200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.416230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.425314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.425696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.425726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.435005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.435317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.435360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.444511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.444800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.444829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.454450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.454837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.454880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.464863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.465248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.465277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.474592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.474936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.474965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.484552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.484913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.484942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.494553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 
00:32:42.098 [2024-07-26 09:06:00.494896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.494926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.504539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.504906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.504942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.514620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.514918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.514947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.524927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.525304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.525350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.534631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.535006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.535037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.544559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.544910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.544939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.098 [2024-07-26 09:06:00.554853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.098 [2024-07-26 09:06:00.555227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.098 [2024-07-26 09:06:00.555257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.356 [2024-07-26 09:06:00.564206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.356 [2024-07-26 09:06:00.564495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.356 [2024-07-26 09:06:00.564524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.356 [2024-07-26 
09:06:00.574396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.356 [2024-07-26 09:06:00.574823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.356 [2024-07-26 09:06:00.574880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.356 [2024-07-26 09:06:00.583663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a3f5c0) with pdu=0x2000190fef90 00:32:42.356 [2024-07-26 09:06:00.584050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.356 [2024-07-26 09:06:00.584087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.356 00:32:42.356 Latency(us) 00:32:42.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.356 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:42.356 nvme0n1 : 2.01 3161.06 395.13 0.00 0.00 5049.88 2949.12 12621.75 00:32:42.356 =================================================================================================================== 00:32:42.356 Total : 3161.06 395.13 0.00 0.00 5049.88 2949.12 12621.75 00:32:42.356 0 00:32:42.356 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:42.357 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:42.357 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:42.357 | .driver_specific 00:32:42.357 | .nvme_error 00:32:42.357 | .status_code 00:32:42.357 | 
.command_transient_transport_error' 00:32:42.357 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 204 > 0 )) 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1112875 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1112875 ']' 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1112875 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1112875 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1112875' 00:32:42.614 killing process with pid 1112875 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1112875 00:32:42.614 Received shutdown signal, test time was about 2.000000 seconds 00:32:42.614 00:32:42.614 Latency(us) 00:32:42.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.614 
=================================================================================================================== 00:32:42.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:42.614 09:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1112875 00:32:42.873 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1111511 00:32:42.873 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1111511 ']' 00:32:42.873 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1111511 00:32:42.873 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:42.873 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:42.873 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1111511 00:32:42.873 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:42.873 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:42.873 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1111511' 00:32:42.873 killing process with pid 1111511 00:32:42.873 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1111511 00:32:42.874 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1111511 00:32:43.132 00:32:43.132 real 0m15.261s 00:32:43.132 user 0m29.572s 00:32:43.132 sys 0m4.347s 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:43.132 09:06:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:43.132 ************************************ 00:32:43.132 END TEST nvmf_digest_error 00:32:43.132 ************************************ 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:43.132 rmmod nvme_tcp 00:32:43.132 rmmod nvme_fabrics 00:32:43.132 rmmod nvme_keyring 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1111511 ']' 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1111511 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1111511 ']' 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1111511 00:32:43.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1111511) - No such process 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@977 -- # echo 'Process with pid 1111511 is not found' 00:32:43.132 Process with pid 1111511 is not found 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.132 09:06:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.035 09:06:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:45.035 00:32:45.035 real 0m34.775s 00:32:45.035 user 0m59.904s 00:32:45.035 sys 0m10.291s 00:32:45.035 09:06:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:45.035 09:06:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:45.035 ************************************ 00:32:45.035 END TEST nvmf_digest 00:32:45.035 ************************************ 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:45.293 09:06:03 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.293 ************************************ 00:32:45.293 START TEST nvmf_bdevperf 00:32:45.293 ************************************ 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:45.293 * Looking for test storage... 00:32:45.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.293 09:06:03 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:45.293 09:06:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.228 09:06:05 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:47.228 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:47.228 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:47.228 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:47.228 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.228 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.229 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.229 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:47.229 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:47.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:47.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:32:47.488 00:32:47.488 --- 10.0.0.2 ping statistics --- 00:32:47.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.488 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:47.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:32:47.488 00:32:47.488 --- 10.0.0.1 ping statistics --- 00:32:47.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.488 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:47.488 
09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1115335 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1115335 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1115335 ']' 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:47.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:47.488 [2024-07-26 09:06:05.824179] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:32:47.489 [2024-07-26 09:06:05.824251] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.489 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.489 [2024-07-26 09:06:05.863234] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:47.489 [2024-07-26 09:06:05.891447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:47.748 [2024-07-26 09:06:05.980180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.748 [2024-07-26 09:06:05.980238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.748 [2024-07-26 09:06:05.980266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.748 [2024-07-26 09:06:05.980278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.748 [2024-07-26 09:06:05.980288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:47.748 [2024-07-26 09:06:05.980382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:47.748 [2024-07-26 09:06:05.980443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:47.748 [2024-07-26 09:06:05.980445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:47.748 [2024-07-26 09:06:06.131391] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:47.748 Malloc0 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:47.748 [2024-07-26 09:06:06.188975] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:32:47.748 
09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:47.748 { 00:32:47.748 "params": { 00:32:47.748 "name": "Nvme$subsystem", 00:32:47.748 "trtype": "$TEST_TRANSPORT", 00:32:47.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:47.748 "adrfam": "ipv4", 00:32:47.748 "trsvcid": "$NVMF_PORT", 00:32:47.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:47.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:47.748 "hdgst": ${hdgst:-false}, 00:32:47.748 "ddgst": ${ddgst:-false} 00:32:47.748 }, 00:32:47.748 "method": "bdev_nvme_attach_controller" 00:32:47.748 } 00:32:47.748 EOF 00:32:47.748 )") 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:32:47.748 09:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:47.748 "params": { 00:32:47.748 "name": "Nvme1", 00:32:47.748 "trtype": "tcp", 00:32:47.748 "traddr": "10.0.0.2", 00:32:47.748 "adrfam": "ipv4", 00:32:47.748 "trsvcid": "4420", 00:32:47.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:47.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:47.748 "hdgst": false, 00:32:47.748 "ddgst": false 00:32:47.748 }, 00:32:47.748 "method": "bdev_nvme_attach_controller" 00:32:47.748 }' 00:32:48.008 [2024-07-26 09:06:06.238710] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:32:48.008 [2024-07-26 09:06:06.238797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115479 ] 00:32:48.008 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.008 [2024-07-26 09:06:06.271820] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:48.008 [2024-07-26 09:06:06.301163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.008 [2024-07-26 09:06:06.389751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.267 Running I/O for 1 seconds... 00:32:49.203 00:32:49.203 Latency(us) 00:32:49.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.203 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:49.203 Verification LBA range: start 0x0 length 0x4000 00:32:49.203 Nvme1n1 : 1.01 8294.19 32.40 0.00 0.00 15365.94 1723.35 15243.19 00:32:49.203 =================================================================================================================== 00:32:49.203 Total : 8294.19 32.40 0.00 0.00 15365.94 1723.35 15243.19 00:32:49.463 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1115855 00:32:49.463 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:49.463 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:49.463 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:49.463 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:32:49.463 09:06:07 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:32:49.463 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:49.463 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:49.463 { 00:32:49.463 "params": { 00:32:49.463 "name": "Nvme$subsystem", 00:32:49.463 "trtype": "$TEST_TRANSPORT", 00:32:49.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.463 "adrfam": "ipv4", 00:32:49.463 "trsvcid": "$NVMF_PORT", 00:32:49.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.463 "hdgst": ${hdgst:-false}, 00:32:49.463 "ddgst": ${ddgst:-false} 00:32:49.463 }, 00:32:49.463 "method": "bdev_nvme_attach_controller" 00:32:49.463 } 00:32:49.463 EOF 00:32:49.463 )") 00:32:49.463 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:32:49.463 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:32:49.464 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:32:49.464 09:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:49.464 "params": { 00:32:49.464 "name": "Nvme1", 00:32:49.464 "trtype": "tcp", 00:32:49.464 "traddr": "10.0.0.2", 00:32:49.464 "adrfam": "ipv4", 00:32:49.464 "trsvcid": "4420", 00:32:49.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:49.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:49.464 "hdgst": false, 00:32:49.464 "ddgst": false 00:32:49.464 }, 00:32:49.464 "method": "bdev_nvme_attach_controller" 00:32:49.464 }' 00:32:49.464 [2024-07-26 09:06:07.837714] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:32:49.464 [2024-07-26 09:06:07.837789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115855 ] 00:32:49.464 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.464 [2024-07-26 09:06:07.870849] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:49.464 [2024-07-26 09:06:07.899112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.723 [2024-07-26 09:06:07.986535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.983 Running I/O for 15 seconds... 00:32:52.519 09:06:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1115335 00:32:52.519 09:06:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:52.519 [2024-07-26 09:06:10.806223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 09:06:10.806272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.519 [2024-07-26 09:06:10.806301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 09:06:10.806317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.519 [2024-07-26 09:06:10.806336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 09:06:10.806370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.519 [2024-07-26 09:06:10.806388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 09:06:10.806405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.519 [2024-07-26 09:06:10.806423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 09:06:10.806440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.519 [2024-07-26 09:06:10.806459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 09:06:10.806474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.519 [2024-07-26 09:06:10.806492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 09:06:10.806509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.519 [2024-07-26 09:06:10.806538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 09:06:10.806555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.519 [2024-07-26 09:06:10.806574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 
09:06:10.806590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.519 [2024-07-26 09:06:10.806607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 09:06:10.806622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.519 [2024-07-26 09:06:10.806639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.519 [2024-07-26 09:06:10.806656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.806676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.806693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.806713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.806731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.806749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.806767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.806787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:65 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.806804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.806821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.806836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.806852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.806867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.806884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.806899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.806916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.806932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.806949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.806968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:52.520 [2024-07-26 09:06:10.806986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 
09:06:10.807553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807725] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.520 [2024-07-26 09:06:10.807937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.520 [2024-07-26 09:06:10.807952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.807970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.807985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 
[2024-07-26 09:06:10.808486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808666] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.521 [2024-07-26 09:06:10.808901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.521 [2024-07-26 09:06:10.808933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.808981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.808996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.809013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.809036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.809053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.809078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.809096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.809126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.809143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.809156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.809171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.809185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.809200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.809213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.521 [2024-07-26 09:06:10.809229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.521 [2024-07-26 09:06:10.809242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 
[2024-07-26 09:06:10.809424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.809939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.522 [2024-07-26 09:06:10.809954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 
09:06:10.809970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.809985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810162] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.522 [2024-07-26 09:06:10.810483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.522 [2024-07-26 09:06:10.810499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a0e60 is same with the state(5) to be set 00:32:52.522 [2024-07-26 09:06:10.810517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.522 [2024-07-26 09:06:10.810530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:32:52.523 [2024-07-26 09:06:10.810543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45600 len:8 PRP1 0x0 PRP2 0x0 00:32:52.523 [2024-07-26 09:06:10.810556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.523 [2024-07-26 09:06:10.810620] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13a0e60 was disconnected and freed. reset controller. 00:32:52.523 [2024-07-26 09:06:10.814417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.523 [2024-07-26 09:06:10.814493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:52.523 [2024-07-26 09:06:10.815184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.523 [2024-07-26 09:06:10.815216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:52.523 [2024-07-26 09:06:10.815233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:52.523 [2024-07-26 09:06:10.815486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:52.523 [2024-07-26 09:06:10.815732] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.523 [2024-07-26 09:06:10.815757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.523 [2024-07-26 09:06:10.815782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:52.523 [2024-07-26 09:06:10.819390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.523 [2024-07-26 09:06:10.828697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.523 [2024-07-26 09:06:10.829145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.523 [2024-07-26 09:06:10.829177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:52.523 [2024-07-26 09:06:10.829194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:52.523 [2024-07-26 09:06:10.829433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:52.523 [2024-07-26 09:06:10.829677] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.523 [2024-07-26 09:06:10.829700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.523 [2024-07-26 09:06:10.829716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.523 [2024-07-26 09:06:10.833295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.523 [2024-07-26 09:06:10.842565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.523 [2024-07-26 09:06:10.843003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.523 [2024-07-26 09:06:10.843034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:52.523 [2024-07-26 09:06:10.843051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:52.523 [2024-07-26 09:06:10.843300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:52.523 [2024-07-26 09:06:10.843543] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.523 [2024-07-26 09:06:10.843567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.523 [2024-07-26 09:06:10.843582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.523 [2024-07-26 09:06:10.847158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.523 [2024-07-26 09:06:10.856400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.523 [2024-07-26 09:06:10.856839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.523 [2024-07-26 09:06:10.856871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:52.523 [2024-07-26 09:06:10.856890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:52.523 [2024-07-26 09:06:10.857152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:52.523 [2024-07-26 09:06:10.857374] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.523 [2024-07-26 09:06:10.857399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.523 [2024-07-26 09:06:10.857414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.523 [2024-07-26 09:06:10.860941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.523 [2024-07-26 09:06:10.870464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.523 [2024-07-26 09:06:10.870930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.523 [2024-07-26 09:06:10.870966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:52.523 [2024-07-26 09:06:10.870985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:52.523 [2024-07-26 09:06:10.871240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:52.523 [2024-07-26 09:06:10.871485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.523 [2024-07-26 09:06:10.871505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.523 [2024-07-26 09:06:10.871518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.523 [2024-07-26 09:06:10.875134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.523 [2024-07-26 09:06:10.884486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.523 [2024-07-26 09:06:10.884984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.523 [2024-07-26 09:06:10.885015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:52.523 [2024-07-26 09:06:10.885033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:52.523 [2024-07-26 09:06:10.885276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:52.523 [2024-07-26 09:06:10.885516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.523 [2024-07-26 09:06:10.885536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.523 [2024-07-26 09:06:10.885548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.523 [2024-07-26 09:06:10.889053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.523 [2024-07-26 09:06:10.898352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.523 [2024-07-26 09:06:10.898798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.523 [2024-07-26 09:06:10.898827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:52.523 [2024-07-26 09:06:10.898843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:52.523 [2024-07-26 09:06:10.899104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:52.523 [2024-07-26 09:06:10.899331] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.523 [2024-07-26 09:06:10.899351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.523 [2024-07-26 09:06:10.899365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.523 [2024-07-26 09:06:10.902903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.523 [2024-07-26 09:06:10.912382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.523 [2024-07-26 09:06:10.912816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.523 [2024-07-26 09:06:10.912845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:52.523 [2024-07-26 09:06:10.912861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:52.523 [2024-07-26 09:06:10.913120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:52.523 [2024-07-26 09:06:10.913324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.523 [2024-07-26 09:06:10.913344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.523 [2024-07-26 09:06:10.913370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.523 [2024-07-26 09:06:10.916893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.523 [2024-07-26 09:06:10.926017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.523 [2024-07-26 09:06:10.926432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.523 [2024-07-26 09:06:10.926460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:52.523 [2024-07-26 09:06:10.926476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:52.523 [2024-07-26 09:06:10.926713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:52.523 [2024-07-26 09:06:10.926956] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.523 [2024-07-26 09:06:10.926979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.523 [2024-07-26 09:06:10.926995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.523 [2024-07-26 09:06:10.930606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.523 [2024-07-26 09:06:10.939988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.523 [2024-07-26 09:06:10.940434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.523 [2024-07-26 09:06:10.940466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.523 [2024-07-26 09:06:10.940483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.523 [2024-07-26 09:06:10.940722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.524 [2024-07-26 09:06:10.940964] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.524 [2024-07-26 09:06:10.940987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.524 [2024-07-26 09:06:10.941003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.524 [2024-07-26 09:06:10.944602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.524 [2024-07-26 09:06:10.953660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.524 [2024-07-26 09:06:10.954046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.524 [2024-07-26 09:06:10.954080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.524 [2024-07-26 09:06:10.954097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.524 [2024-07-26 09:06:10.954311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.524 [2024-07-26 09:06:10.954529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.524 [2024-07-26 09:06:10.954551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.524 [2024-07-26 09:06:10.954565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.524 [2024-07-26 09:06:10.957711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.524 [2024-07-26 09:06:10.967170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.524 [2024-07-26 09:06:10.967621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.524 [2024-07-26 09:06:10.967647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.524 [2024-07-26 09:06:10.967678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.524 [2024-07-26 09:06:10.967928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.524 [2024-07-26 09:06:10.968186] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.524 [2024-07-26 09:06:10.968208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.524 [2024-07-26 09:06:10.968221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.524 [2024-07-26 09:06:10.971802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.784 [2024-07-26 09:06:10.980935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.784 [2024-07-26 09:06:10.981347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.784 [2024-07-26 09:06:10.981377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.784 [2024-07-26 09:06:10.981394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.784 [2024-07-26 09:06:10.981632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.784 [2024-07-26 09:06:10.981874] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.784 [2024-07-26 09:06:10.981898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.784 [2024-07-26 09:06:10.981914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.784 [2024-07-26 09:06:10.985515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.784 [2024-07-26 09:06:10.994782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.784 [2024-07-26 09:06:10.995218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.784 [2024-07-26 09:06:10.995249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.784 [2024-07-26 09:06:10.995267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.784 [2024-07-26 09:06:10.995505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.784 [2024-07-26 09:06:10.995748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.784 [2024-07-26 09:06:10.995772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.784 [2024-07-26 09:06:10.995787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.784 [2024-07-26 09:06:10.999315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.784 [2024-07-26 09:06:11.008809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.784 [2024-07-26 09:06:11.009240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.784 [2024-07-26 09:06:11.009268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.784 [2024-07-26 09:06:11.009290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.784 [2024-07-26 09:06:11.009548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.784 [2024-07-26 09:06:11.009792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.784 [2024-07-26 09:06:11.009815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.784 [2024-07-26 09:06:11.009831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.784 [2024-07-26 09:06:11.013435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.784 [2024-07-26 09:06:11.022630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.784 [2024-07-26 09:06:11.023065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.784 [2024-07-26 09:06:11.023111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.784 [2024-07-26 09:06:11.023127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.784 [2024-07-26 09:06:11.023366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.784 [2024-07-26 09:06:11.023622] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.784 [2024-07-26 09:06:11.023646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.784 [2024-07-26 09:06:11.023661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.784 [2024-07-26 09:06:11.027221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.784 [2024-07-26 09:06:11.036485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.784 [2024-07-26 09:06:11.036913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.784 [2024-07-26 09:06:11.036940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.784 [2024-07-26 09:06:11.036955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.784 [2024-07-26 09:06:11.037197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.784 [2024-07-26 09:06:11.037455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.784 [2024-07-26 09:06:11.037479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.784 [2024-07-26 09:06:11.037495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.784 [2024-07-26 09:06:11.041064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.784 [2024-07-26 09:06:11.050343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.784 [2024-07-26 09:06:11.050767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.784 [2024-07-26 09:06:11.050798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.784 [2024-07-26 09:06:11.050815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.784 [2024-07-26 09:06:11.051053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.784 [2024-07-26 09:06:11.051318] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.784 [2024-07-26 09:06:11.051348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.784 [2024-07-26 09:06:11.051365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.784 [2024-07-26 09:06:11.054932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.784 [2024-07-26 09:06:11.064200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.784 [2024-07-26 09:06:11.064625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.784 [2024-07-26 09:06:11.064656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.784 [2024-07-26 09:06:11.064674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.784 [2024-07-26 09:06:11.064912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.784 [2024-07-26 09:06:11.065167] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.784 [2024-07-26 09:06:11.065192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.784 [2024-07-26 09:06:11.065208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.784 [2024-07-26 09:06:11.068779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.784 [2024-07-26 09:06:11.078052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.784 [2024-07-26 09:06:11.078461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.784 [2024-07-26 09:06:11.078492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.784 [2024-07-26 09:06:11.078510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.784 [2024-07-26 09:06:11.078748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.784 [2024-07-26 09:06:11.078991] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.784 [2024-07-26 09:06:11.079014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.784 [2024-07-26 09:06:11.079029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.784 [2024-07-26 09:06:11.082605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.784 [2024-07-26 09:06:11.092088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.784 [2024-07-26 09:06:11.092486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.784 [2024-07-26 09:06:11.092517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.784 [2024-07-26 09:06:11.092534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.784 [2024-07-26 09:06:11.092772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.784 [2024-07-26 09:06:11.093015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.784 [2024-07-26 09:06:11.093039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.784 [2024-07-26 09:06:11.093054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.784 [2024-07-26 09:06:11.096635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.785 [2024-07-26 09:06:11.106123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.785 [2024-07-26 09:06:11.106550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.785 [2024-07-26 09:06:11.106580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.785 [2024-07-26 09:06:11.106598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.785 [2024-07-26 09:06:11.106836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.785 [2024-07-26 09:06:11.107091] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.785 [2024-07-26 09:06:11.107115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.785 [2024-07-26 09:06:11.107131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.785 [2024-07-26 09:06:11.110698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.785 [2024-07-26 09:06:11.119974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.785 [2024-07-26 09:06:11.120396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.785 [2024-07-26 09:06:11.120427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.785 [2024-07-26 09:06:11.120445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.785 [2024-07-26 09:06:11.120683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.785 [2024-07-26 09:06:11.120926] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.785 [2024-07-26 09:06:11.120949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.785 [2024-07-26 09:06:11.120965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.785 [2024-07-26 09:06:11.124542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.785 [2024-07-26 09:06:11.133809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.785 [2024-07-26 09:06:11.134217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.785 [2024-07-26 09:06:11.134247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.785 [2024-07-26 09:06:11.134265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.785 [2024-07-26 09:06:11.134503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.785 [2024-07-26 09:06:11.134746] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.785 [2024-07-26 09:06:11.134770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.785 [2024-07-26 09:06:11.134785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.785 [2024-07-26 09:06:11.138358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.785 [2024-07-26 09:06:11.147868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.785 [2024-07-26 09:06:11.148276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.785 [2024-07-26 09:06:11.148308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.785 [2024-07-26 09:06:11.148331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.785 [2024-07-26 09:06:11.148571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.785 [2024-07-26 09:06:11.148814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.785 [2024-07-26 09:06:11.148838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.785 [2024-07-26 09:06:11.148853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.785 [2024-07-26 09:06:11.152431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.785 [2024-07-26 09:06:11.161699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.785 [2024-07-26 09:06:11.162148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.785 [2024-07-26 09:06:11.162180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.785 [2024-07-26 09:06:11.162197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.785 [2024-07-26 09:06:11.162436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.785 [2024-07-26 09:06:11.162679] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.785 [2024-07-26 09:06:11.162703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.785 [2024-07-26 09:06:11.162718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.785 [2024-07-26 09:06:11.166297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.785 [2024-07-26 09:06:11.175566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.785 [2024-07-26 09:06:11.175974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.785 [2024-07-26 09:06:11.176005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.785 [2024-07-26 09:06:11.176022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.785 [2024-07-26 09:06:11.176272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.785 [2024-07-26 09:06:11.176515] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.785 [2024-07-26 09:06:11.176539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.785 [2024-07-26 09:06:11.176554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.785 [2024-07-26 09:06:11.180130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.785 [2024-07-26 09:06:11.189601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.785 [2024-07-26 09:06:11.190022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.785 [2024-07-26 09:06:11.190052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.785 [2024-07-26 09:06:11.190080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.785 [2024-07-26 09:06:11.190320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.785 [2024-07-26 09:06:11.190564] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.785 [2024-07-26 09:06:11.190587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.785 [2024-07-26 09:06:11.190608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.785 [2024-07-26 09:06:11.194180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.785 [2024-07-26 09:06:11.203442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.785 [2024-07-26 09:06:11.203885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.785 [2024-07-26 09:06:11.203915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.785 [2024-07-26 09:06:11.203933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.785 [2024-07-26 09:06:11.204184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.785 [2024-07-26 09:06:11.204429] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.785 [2024-07-26 09:06:11.204453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.785 [2024-07-26 09:06:11.204468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.785 [2024-07-26 09:06:11.208033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.785 [2024-07-26 09:06:11.217317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.785 [2024-07-26 09:06:11.217758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.785 [2024-07-26 09:06:11.217790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.785 [2024-07-26 09:06:11.217807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.785 [2024-07-26 09:06:11.218045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.785 [2024-07-26 09:06:11.218300] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.785 [2024-07-26 09:06:11.218324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.785 [2024-07-26 09:06:11.218339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.785 [2024-07-26 09:06:11.221906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.785 [2024-07-26 09:06:11.231176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.785 [2024-07-26 09:06:11.231577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.785 [2024-07-26 09:06:11.231608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:52.785 [2024-07-26 09:06:11.231625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:52.785 [2024-07-26 09:06:11.231863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:52.785 [2024-07-26 09:06:11.232119] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:52.785 [2024-07-26 09:06:11.232143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:52.785 [2024-07-26 09:06:11.232159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:52.785 [2024-07-26 09:06:11.235726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.045 [2024-07-26 09:06:11.245212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.045 [2024-07-26 09:06:11.245661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.045 [2024-07-26 09:06:11.245692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.045 [2024-07-26 09:06:11.245710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.045 [2024-07-26 09:06:11.245947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.045 [2024-07-26 09:06:11.246202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.045 [2024-07-26 09:06:11.246227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.045 [2024-07-26 09:06:11.246242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.045 [2024-07-26 09:06:11.249814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.045 [2024-07-26 09:06:11.259089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.045 [2024-07-26 09:06:11.259528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.045 [2024-07-26 09:06:11.259559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.045 [2024-07-26 09:06:11.259576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.045 [2024-07-26 09:06:11.259815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.045 [2024-07-26 09:06:11.260069] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.045 [2024-07-26 09:06:11.260093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.045 [2024-07-26 09:06:11.260108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.045 [2024-07-26 09:06:11.263674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.045 [2024-07-26 09:06:11.272948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.045 [2024-07-26 09:06:11.273353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.045 [2024-07-26 09:06:11.273384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.045 [2024-07-26 09:06:11.273402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.045 [2024-07-26 09:06:11.273639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.046 [2024-07-26 09:06:11.273883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.046 [2024-07-26 09:06:11.273906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.046 [2024-07-26 09:06:11.273922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.046 [2024-07-26 09:06:11.277637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.046 [2024-07-26 09:06:11.286906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.046 [2024-07-26 09:06:11.287343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.046 [2024-07-26 09:06:11.287374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.046 [2024-07-26 09:06:11.287391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.046 [2024-07-26 09:06:11.287635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.046 [2024-07-26 09:06:11.287878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.046 [2024-07-26 09:06:11.287902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.046 [2024-07-26 09:06:11.287918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.046 [2024-07-26 09:06:11.291497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.046 [2024-07-26 09:06:11.300762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.046 [2024-07-26 09:06:11.301159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.046 [2024-07-26 09:06:11.301190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.046 [2024-07-26 09:06:11.301207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.046 [2024-07-26 09:06:11.301445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.046 [2024-07-26 09:06:11.301688] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.046 [2024-07-26 09:06:11.301712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.046 [2024-07-26 09:06:11.301727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.046 [2024-07-26 09:06:11.305300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.046 [2024-07-26 09:06:11.314777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.046 [2024-07-26 09:06:11.315223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.046 [2024-07-26 09:06:11.315254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.046 [2024-07-26 09:06:11.315272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.046 [2024-07-26 09:06:11.315509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.046 [2024-07-26 09:06:11.315752] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.046 [2024-07-26 09:06:11.315776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.046 [2024-07-26 09:06:11.315791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.046 [2024-07-26 09:06:11.319382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.046 [2024-07-26 09:06:11.328655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.046 [2024-07-26 09:06:11.329066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.046 [2024-07-26 09:06:11.329097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.046 [2024-07-26 09:06:11.329114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.046 [2024-07-26 09:06:11.329352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.046 [2024-07-26 09:06:11.329596] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.046 [2024-07-26 09:06:11.329620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.046 [2024-07-26 09:06:11.329640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.046 [2024-07-26 09:06:11.333215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.046 [2024-07-26 09:06:11.342688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.046 [2024-07-26 09:06:11.343113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.046 [2024-07-26 09:06:11.343144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.046 [2024-07-26 09:06:11.343161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.046 [2024-07-26 09:06:11.343400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.046 [2024-07-26 09:06:11.343642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.046 [2024-07-26 09:06:11.343666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.046 [2024-07-26 09:06:11.343682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.046 [2024-07-26 09:06:11.347262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.046 [2024-07-26 09:06:11.356528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.046 [2024-07-26 09:06:11.356927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.046 [2024-07-26 09:06:11.356957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.046 [2024-07-26 09:06:11.356974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.046 [2024-07-26 09:06:11.357224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.046 [2024-07-26 09:06:11.357468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.046 [2024-07-26 09:06:11.357492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.046 [2024-07-26 09:06:11.357507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.046 [2024-07-26 09:06:11.361081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.046 [2024-07-26 09:06:11.370557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.046 [2024-07-26 09:06:11.370994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.046 [2024-07-26 09:06:11.371025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.046 [2024-07-26 09:06:11.371042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.046 [2024-07-26 09:06:11.371290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.046 [2024-07-26 09:06:11.371534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.046 [2024-07-26 09:06:11.371557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.046 [2024-07-26 09:06:11.371573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.046 [2024-07-26 09:06:11.375148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.046 [2024-07-26 09:06:11.384534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.046 [2024-07-26 09:06:11.384945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.046 [2024-07-26 09:06:11.384983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.046 [2024-07-26 09:06:11.385002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.046 [2024-07-26 09:06:11.385254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.046 [2024-07-26 09:06:11.385498] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.046 [2024-07-26 09:06:11.385522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.046 [2024-07-26 09:06:11.385537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.046 [2024-07-26 09:06:11.389113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.046 [2024-07-26 09:06:11.398382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.046 [2024-07-26 09:06:11.398813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.046 [2024-07-26 09:06:11.398845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.046 [2024-07-26 09:06:11.398862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.046 [2024-07-26 09:06:11.399112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.046 [2024-07-26 09:06:11.399357] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.046 [2024-07-26 09:06:11.399381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.046 [2024-07-26 09:06:11.399396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.046 [2024-07-26 09:06:11.402961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.046 [2024-07-26 09:06:11.412230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.046 [2024-07-26 09:06:11.412669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.046 [2024-07-26 09:06:11.412699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.046 [2024-07-26 09:06:11.412716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.046 [2024-07-26 09:06:11.412954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.047 [2024-07-26 09:06:11.413210] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.047 [2024-07-26 09:06:11.413235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.047 [2024-07-26 09:06:11.413251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.047 [2024-07-26 09:06:11.416817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.047 [2024-07-26 09:06:11.426104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.047 [2024-07-26 09:06:11.426507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.047 [2024-07-26 09:06:11.426539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.047 [2024-07-26 09:06:11.426556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.047 [2024-07-26 09:06:11.426794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.047 [2024-07-26 09:06:11.427044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.047 [2024-07-26 09:06:11.427081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.047 [2024-07-26 09:06:11.427098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.047 [2024-07-26 09:06:11.430667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.047 [2024-07-26 09:06:11.439932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.047 [2024-07-26 09:06:11.440364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.047 [2024-07-26 09:06:11.440395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.047 [2024-07-26 09:06:11.440413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.047 [2024-07-26 09:06:11.440651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.047 [2024-07-26 09:06:11.440894] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.047 [2024-07-26 09:06:11.440917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.047 [2024-07-26 09:06:11.440932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.047 [2024-07-26 09:06:11.444511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.047 [2024-07-26 09:06:11.453779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.047 [2024-07-26 09:06:11.454187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.047 [2024-07-26 09:06:11.454217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.047 [2024-07-26 09:06:11.454235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.047 [2024-07-26 09:06:11.454473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.047 [2024-07-26 09:06:11.454716] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.047 [2024-07-26 09:06:11.454739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.047 [2024-07-26 09:06:11.454755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.047 [2024-07-26 09:06:11.458330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.047 [2024-07-26 09:06:11.467823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.047 [2024-07-26 09:06:11.468262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.047 [2024-07-26 09:06:11.468293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.047 [2024-07-26 09:06:11.468311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.047 [2024-07-26 09:06:11.468548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.047 [2024-07-26 09:06:11.468791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.047 [2024-07-26 09:06:11.468815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.047 [2024-07-26 09:06:11.468830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.047 [2024-07-26 09:06:11.472411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.047 [2024-07-26 09:06:11.481673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.047 [2024-07-26 09:06:11.482106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.047 [2024-07-26 09:06:11.482137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.047 [2024-07-26 09:06:11.482155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.047 [2024-07-26 09:06:11.482393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.047 [2024-07-26 09:06:11.482636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.047 [2024-07-26 09:06:11.482659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.047 [2024-07-26 09:06:11.482675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.047 [2024-07-26 09:06:11.486254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.047 [2024-07-26 09:06:11.495526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.047 [2024-07-26 09:06:11.495950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.047 [2024-07-26 09:06:11.495981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.047 [2024-07-26 09:06:11.495998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.047 [2024-07-26 09:06:11.496245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.047 [2024-07-26 09:06:11.496489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.047 [2024-07-26 09:06:11.496513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.047 [2024-07-26 09:06:11.496528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.047 [2024-07-26 09:06:11.500105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.307 [2024-07-26 09:06:11.509379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.307 [2024-07-26 09:06:11.509803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.307 [2024-07-26 09:06:11.509833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.307 [2024-07-26 09:06:11.509850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.307 [2024-07-26 09:06:11.510099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.307 [2024-07-26 09:06:11.510343] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.307 [2024-07-26 09:06:11.510367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.307 [2024-07-26 09:06:11.510382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.307 [2024-07-26 09:06:11.513947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.307 [2024-07-26 09:06:11.523257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.307 [2024-07-26 09:06:11.523700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.307 [2024-07-26 09:06:11.523742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.307 [2024-07-26 09:06:11.523781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.307 [2024-07-26 09:06:11.524020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.307 [2024-07-26 09:06:11.524274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.307 [2024-07-26 09:06:11.524299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.307 [2024-07-26 09:06:11.524314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.307 [2024-07-26 09:06:11.527883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.307 [2024-07-26 09:06:11.537161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.307 [2024-07-26 09:06:11.537580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.307 [2024-07-26 09:06:11.537611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.307 [2024-07-26 09:06:11.537628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.307 [2024-07-26 09:06:11.537866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.307 [2024-07-26 09:06:11.538120] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.307 [2024-07-26 09:06:11.538145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.308 [2024-07-26 09:06:11.538160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.308 [2024-07-26 09:06:11.541728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.308 [2024-07-26 09:06:11.551013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.308 [2024-07-26 09:06:11.551444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.308 [2024-07-26 09:06:11.551475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.308 [2024-07-26 09:06:11.551492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.308 [2024-07-26 09:06:11.551730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.308 [2024-07-26 09:06:11.551972] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.308 [2024-07-26 09:06:11.551996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.308 [2024-07-26 09:06:11.552012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.308 [2024-07-26 09:06:11.555590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.308 [2024-07-26 09:06:11.564864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.308 [2024-07-26 09:06:11.565274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.308 [2024-07-26 09:06:11.565305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.308 [2024-07-26 09:06:11.565323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.308 [2024-07-26 09:06:11.565562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.308 [2024-07-26 09:06:11.565805] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.308 [2024-07-26 09:06:11.565834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.308 [2024-07-26 09:06:11.565852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.308 [2024-07-26 09:06:11.569433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.308 [2024-07-26 09:06:11.578701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.308 [2024-07-26 09:06:11.579106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.308 [2024-07-26 09:06:11.579137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.308 [2024-07-26 09:06:11.579154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.308 [2024-07-26 09:06:11.579393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.308 [2024-07-26 09:06:11.579636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.308 [2024-07-26 09:06:11.579660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.308 [2024-07-26 09:06:11.579675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.308 [2024-07-26 09:06:11.583253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.308 [2024-07-26 09:06:11.592741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.308 [2024-07-26 09:06:11.593146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.308 [2024-07-26 09:06:11.593177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.308 [2024-07-26 09:06:11.593195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.308 [2024-07-26 09:06:11.593433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.308 [2024-07-26 09:06:11.593675] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.308 [2024-07-26 09:06:11.593700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.308 [2024-07-26 09:06:11.593715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.308 [2024-07-26 09:06:11.597292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.308 [2024-07-26 09:06:11.606775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.308 [2024-07-26 09:06:11.607181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.308 [2024-07-26 09:06:11.607212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.308 [2024-07-26 09:06:11.607229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.308 [2024-07-26 09:06:11.607467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.308 [2024-07-26 09:06:11.607709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.308 [2024-07-26 09:06:11.607733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.308 [2024-07-26 09:06:11.607749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.308 [2024-07-26 09:06:11.611329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.572 [2024-07-26 09:06:11.996290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.572 [2024-07-26 09:06:11.996717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.572 [2024-07-26 09:06:11.996747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.572 [2024-07-26 09:06:11.996764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.572 [2024-07-26 09:06:11.997002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.572 [2024-07-26 09:06:11.997258] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.572 [2024-07-26 09:06:11.997282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.572 [2024-07-26 09:06:11.997297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.572 [2024-07-26 09:06:12.000864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.572 [2024-07-26 09:06:12.010141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.572 [2024-07-26 09:06:12.010541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.572 [2024-07-26 09:06:12.010572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.572 [2024-07-26 09:06:12.010589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.572 [2024-07-26 09:06:12.010827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.572 [2024-07-26 09:06:12.011080] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.572 [2024-07-26 09:06:12.011107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.572 [2024-07-26 09:06:12.011123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.572 [2024-07-26 09:06:12.014687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.572 [2024-07-26 09:06:12.023973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.572 [2024-07-26 09:06:12.024417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.572 [2024-07-26 09:06:12.024448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:53.572 [2024-07-26 09:06:12.024465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:53.572 [2024-07-26 09:06:12.024703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:53.572 [2024-07-26 09:06:12.024946] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.572 [2024-07-26 09:06:12.024970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.572 [2024-07-26 09:06:12.024985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.572 [2024-07-26 09:06:12.028563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.834 [2024-07-26 09:06:12.037830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.834 [2024-07-26 09:06:12.038238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.834 [2024-07-26 09:06:12.038269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.834 [2024-07-26 09:06:12.038292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.834 [2024-07-26 09:06:12.038530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.834 [2024-07-26 09:06:12.038773] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.834 [2024-07-26 09:06:12.038797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.834 [2024-07-26 09:06:12.038813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.834 [2024-07-26 09:06:12.042392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.834 [2024-07-26 09:06:12.051679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.834 [2024-07-26 09:06:12.052112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.834 [2024-07-26 09:06:12.052144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.834 [2024-07-26 09:06:12.052162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.834 [2024-07-26 09:06:12.052401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.834 [2024-07-26 09:06:12.052644] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.834 [2024-07-26 09:06:12.052667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.834 [2024-07-26 09:06:12.052682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.834 [2024-07-26 09:06:12.056271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.834 [2024-07-26 09:06:12.065570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.834 [2024-07-26 09:06:12.065971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.834 [2024-07-26 09:06:12.066004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.834 [2024-07-26 09:06:12.066021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.834 [2024-07-26 09:06:12.066270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.834 [2024-07-26 09:06:12.066515] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.834 [2024-07-26 09:06:12.066538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.834 [2024-07-26 09:06:12.066554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.834 [2024-07-26 09:06:12.070132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.834 [2024-07-26 09:06:12.079409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.834 [2024-07-26 09:06:12.079837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.834 [2024-07-26 09:06:12.079867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.834 [2024-07-26 09:06:12.079885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.834 [2024-07-26 09:06:12.080134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.834 [2024-07-26 09:06:12.080379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.834 [2024-07-26 09:06:12.080409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.834 [2024-07-26 09:06:12.080425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.834 [2024-07-26 09:06:12.083998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.834 [2024-07-26 09:06:12.093295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.834 [2024-07-26 09:06:12.093730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.834 [2024-07-26 09:06:12.093761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.834 [2024-07-26 09:06:12.093778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.834 [2024-07-26 09:06:12.094016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.834 [2024-07-26 09:06:12.094268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.834 [2024-07-26 09:06:12.094292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.834 [2024-07-26 09:06:12.094309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.834 [2024-07-26 09:06:12.097877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.834 [2024-07-26 09:06:12.107149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.834 [2024-07-26 09:06:12.107571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.835 [2024-07-26 09:06:12.107602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.835 [2024-07-26 09:06:12.107620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.835 [2024-07-26 09:06:12.107858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.835 [2024-07-26 09:06:12.108111] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.835 [2024-07-26 09:06:12.108135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.835 [2024-07-26 09:06:12.108150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.835 [2024-07-26 09:06:12.111740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.835 [2024-07-26 09:06:12.121052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.835 [2024-07-26 09:06:12.121504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.835 [2024-07-26 09:06:12.121535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.835 [2024-07-26 09:06:12.121553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.835 [2024-07-26 09:06:12.121792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.835 [2024-07-26 09:06:12.122034] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.835 [2024-07-26 09:06:12.122068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.835 [2024-07-26 09:06:12.122086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.835 [2024-07-26 09:06:12.125655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.835 [2024-07-26 09:06:12.134965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.835 [2024-07-26 09:06:12.135388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.835 [2024-07-26 09:06:12.135420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.835 [2024-07-26 09:06:12.135438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.835 [2024-07-26 09:06:12.135677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.835 [2024-07-26 09:06:12.135920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.835 [2024-07-26 09:06:12.135945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.835 [2024-07-26 09:06:12.135959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.835 [2024-07-26 09:06:12.139553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.835 [2024-07-26 09:06:12.148842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.835 [2024-07-26 09:06:12.149262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.835 [2024-07-26 09:06:12.149293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.835 [2024-07-26 09:06:12.149310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.835 [2024-07-26 09:06:12.149548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.835 [2024-07-26 09:06:12.149792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.835 [2024-07-26 09:06:12.149815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.835 [2024-07-26 09:06:12.149830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.835 [2024-07-26 09:06:12.153416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.835 [2024-07-26 09:06:12.162700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.835 [2024-07-26 09:06:12.163136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.835 [2024-07-26 09:06:12.163167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.835 [2024-07-26 09:06:12.163185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.835 [2024-07-26 09:06:12.163423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.835 [2024-07-26 09:06:12.163666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.835 [2024-07-26 09:06:12.163689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.835 [2024-07-26 09:06:12.163704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.835 [2024-07-26 09:06:12.167284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.835 [2024-07-26 09:06:12.176552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.835 [2024-07-26 09:06:12.176982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.835 [2024-07-26 09:06:12.177013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.835 [2024-07-26 09:06:12.177030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.835 [2024-07-26 09:06:12.177285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.835 [2024-07-26 09:06:12.177529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.835 [2024-07-26 09:06:12.177553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.835 [2024-07-26 09:06:12.177568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.835 [2024-07-26 09:06:12.181142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.835 [2024-07-26 09:06:12.190416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.835 [2024-07-26 09:06:12.190820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.835 [2024-07-26 09:06:12.190851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.835 [2024-07-26 09:06:12.190869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.835 [2024-07-26 09:06:12.191119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.835 [2024-07-26 09:06:12.191364] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.835 [2024-07-26 09:06:12.191387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.835 [2024-07-26 09:06:12.191402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.835 [2024-07-26 09:06:12.194969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.835 [2024-07-26 09:06:12.204456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.835 [2024-07-26 09:06:12.204889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.835 [2024-07-26 09:06:12.204919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.835 [2024-07-26 09:06:12.204937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.835 [2024-07-26 09:06:12.205188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.835 [2024-07-26 09:06:12.205432] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.835 [2024-07-26 09:06:12.205456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.835 [2024-07-26 09:06:12.205472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.835 [2024-07-26 09:06:12.209042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.835 [2024-07-26 09:06:12.218322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.835 [2024-07-26 09:06:12.218757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.835 [2024-07-26 09:06:12.218787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.835 [2024-07-26 09:06:12.218805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.835 [2024-07-26 09:06:12.219042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.835 [2024-07-26 09:06:12.219296] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.835 [2024-07-26 09:06:12.219321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.835 [2024-07-26 09:06:12.219344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.835 [2024-07-26 09:06:12.222925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.835 [2024-07-26 09:06:12.232202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.835 [2024-07-26 09:06:12.232639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.835 [2024-07-26 09:06:12.232670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.835 [2024-07-26 09:06:12.232687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.835 [2024-07-26 09:06:12.232925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.835 [2024-07-26 09:06:12.233182] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.835 [2024-07-26 09:06:12.233206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.835 [2024-07-26 09:06:12.233222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.835 [2024-07-26 09:06:12.236830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.835 [2024-07-26 09:06:12.246102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.835 [2024-07-26 09:06:12.246537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.836 [2024-07-26 09:06:12.246568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.836 [2024-07-26 09:06:12.246585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.836 [2024-07-26 09:06:12.246823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.836 [2024-07-26 09:06:12.247086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.836 [2024-07-26 09:06:12.247110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.836 [2024-07-26 09:06:12.247126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.836 [2024-07-26 09:06:12.250693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.836 [2024-07-26 09:06:12.259959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.836 [2024-07-26 09:06:12.260378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.836 [2024-07-26 09:06:12.260409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.836 [2024-07-26 09:06:12.260426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.836 [2024-07-26 09:06:12.260664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.836 [2024-07-26 09:06:12.260907] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.836 [2024-07-26 09:06:12.260931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.836 [2024-07-26 09:06:12.260946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.836 [2024-07-26 09:06:12.264528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.836 [2024-07-26 09:06:12.273800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.836 [2024-07-26 09:06:12.274240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.836 [2024-07-26 09:06:12.274271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.836 [2024-07-26 09:06:12.274288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.836 [2024-07-26 09:06:12.274526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.836 [2024-07-26 09:06:12.274769] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.836 [2024-07-26 09:06:12.274793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.836 [2024-07-26 09:06:12.274808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.836 [2024-07-26 09:06:12.278391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:53.836 [2024-07-26 09:06:12.287667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.836 [2024-07-26 09:06:12.288092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.836 [2024-07-26 09:06:12.288123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:53.836 [2024-07-26 09:06:12.288141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:53.836 [2024-07-26 09:06:12.288379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:53.836 [2024-07-26 09:06:12.288622] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:53.836 [2024-07-26 09:06:12.288646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:53.836 [2024-07-26 09:06:12.288661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.836 [2024-07-26 09:06:12.292252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.097 [2024-07-26 09:06:12.301535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.097 [2024-07-26 09:06:12.301940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.097 [2024-07-26 09:06:12.301970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.097 [2024-07-26 09:06:12.301988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.097 [2024-07-26 09:06:12.302238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.097 [2024-07-26 09:06:12.302482] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.097 [2024-07-26 09:06:12.302506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.097 [2024-07-26 09:06:12.302521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.097 [2024-07-26 09:06:12.306100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.097 [2024-07-26 09:06:12.315371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.097 [2024-07-26 09:06:12.315800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.097 [2024-07-26 09:06:12.315830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.097 [2024-07-26 09:06:12.315847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.097 [2024-07-26 09:06:12.316097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.097 [2024-07-26 09:06:12.316347] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.097 [2024-07-26 09:06:12.316371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.097 [2024-07-26 09:06:12.316386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.097 [2024-07-26 09:06:12.319956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.097 [2024-07-26 09:06:12.329256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.097 [2024-07-26 09:06:12.329658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.097 [2024-07-26 09:06:12.329688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.097 [2024-07-26 09:06:12.329706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.097 [2024-07-26 09:06:12.329943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.097 [2024-07-26 09:06:12.330198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.097 [2024-07-26 09:06:12.330223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.097 [2024-07-26 09:06:12.330238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.097 [2024-07-26 09:06:12.333805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.097 [2024-07-26 09:06:12.343294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.097 [2024-07-26 09:06:12.343730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.097 [2024-07-26 09:06:12.343760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.097 [2024-07-26 09:06:12.343777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.097 [2024-07-26 09:06:12.344015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.097 [2024-07-26 09:06:12.344269] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.097 [2024-07-26 09:06:12.344293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.097 [2024-07-26 09:06:12.344309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.097 [2024-07-26 09:06:12.347882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.097 [2024-07-26 09:06:12.357160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.097 [2024-07-26 09:06:12.357584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.097 [2024-07-26 09:06:12.357615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.097 [2024-07-26 09:06:12.357632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.097 [2024-07-26 09:06:12.357870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.097 [2024-07-26 09:06:12.358126] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.097 [2024-07-26 09:06:12.358150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.097 [2024-07-26 09:06:12.358166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.097 [2024-07-26 09:06:12.361741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.097 [2024-07-26 09:06:12.371012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.097 [2024-07-26 09:06:12.371451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.097 [2024-07-26 09:06:12.371482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.097 [2024-07-26 09:06:12.371499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.097 [2024-07-26 09:06:12.371738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.097 [2024-07-26 09:06:12.371980] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.097 [2024-07-26 09:06:12.372004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.097 [2024-07-26 09:06:12.372020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.097 [2024-07-26 09:06:12.375600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.097 [2024-07-26 09:06:12.384874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.097 [2024-07-26 09:06:12.385311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.097 [2024-07-26 09:06:12.385342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.097 [2024-07-26 09:06:12.385360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.097 [2024-07-26 09:06:12.385597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.097 [2024-07-26 09:06:12.385841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.097 [2024-07-26 09:06:12.385864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.097 [2024-07-26 09:06:12.385880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.097 [2024-07-26 09:06:12.389460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.097 [2024-07-26 09:06:12.398729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.097 [2024-07-26 09:06:12.399155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.098 [2024-07-26 09:06:12.399185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.098 [2024-07-26 09:06:12.399202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.098 [2024-07-26 09:06:12.399440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.098 [2024-07-26 09:06:12.399683] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.098 [2024-07-26 09:06:12.399706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.098 [2024-07-26 09:06:12.399721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.098 [2024-07-26 09:06:12.403303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.098 [2024-07-26 09:06:12.412583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.098 [2024-07-26 09:06:12.413005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.098 [2024-07-26 09:06:12.413040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.098 [2024-07-26 09:06:12.413069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.098 [2024-07-26 09:06:12.413311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.098 [2024-07-26 09:06:12.413554] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.098 [2024-07-26 09:06:12.413578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.098 [2024-07-26 09:06:12.413593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.098 [2024-07-26 09:06:12.417173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.098 [2024-07-26 09:06:12.426460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.098 [2024-07-26 09:06:12.426891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.098 [2024-07-26 09:06:12.426922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.098 [2024-07-26 09:06:12.426940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.098 [2024-07-26 09:06:12.427191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.098 [2024-07-26 09:06:12.427435] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.098 [2024-07-26 09:06:12.427459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.098 [2024-07-26 09:06:12.427474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.098 [2024-07-26 09:06:12.431043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.098 [2024-07-26 09:06:12.440326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.098 [2024-07-26 09:06:12.440725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.098 [2024-07-26 09:06:12.440755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.098 [2024-07-26 09:06:12.440772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.098 [2024-07-26 09:06:12.441010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.098 [2024-07-26 09:06:12.441265] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.098 [2024-07-26 09:06:12.441289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.098 [2024-07-26 09:06:12.441304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.098 [2024-07-26 09:06:12.444874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.098 [2024-07-26 09:06:12.454370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.098 [2024-07-26 09:06:12.454768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.098 [2024-07-26 09:06:12.454799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.098 [2024-07-26 09:06:12.454817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.098 [2024-07-26 09:06:12.455054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.098 [2024-07-26 09:06:12.455316] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.098 [2024-07-26 09:06:12.455341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.098 [2024-07-26 09:06:12.455356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.098 [2024-07-26 09:06:12.458925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.098 [2024-07-26 09:06:12.468407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.098 [2024-07-26 09:06:12.468848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.098 [2024-07-26 09:06:12.468878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.098 [2024-07-26 09:06:12.468895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.098 [2024-07-26 09:06:12.469146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.098 [2024-07-26 09:06:12.469390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.098 [2024-07-26 09:06:12.469414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.098 [2024-07-26 09:06:12.469429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.098 [2024-07-26 09:06:12.472998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.098 [2024-07-26 09:06:12.482279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.098 [2024-07-26 09:06:12.482686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.098 [2024-07-26 09:06:12.482717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.098 [2024-07-26 09:06:12.482735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.098 [2024-07-26 09:06:12.482972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.098 [2024-07-26 09:06:12.483228] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.098 [2024-07-26 09:06:12.483253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.098 [2024-07-26 09:06:12.483268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.098 [2024-07-26 09:06:12.486838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.098 [2024-07-26 09:06:12.496113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.098 [2024-07-26 09:06:12.496514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.098 [2024-07-26 09:06:12.496545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.098 [2024-07-26 09:06:12.496562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.098 [2024-07-26 09:06:12.496800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.098 [2024-07-26 09:06:12.497043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.098 [2024-07-26 09:06:12.497078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.098 [2024-07-26 09:06:12.497095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.098 [2024-07-26 09:06:12.500661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.098 [2024-07-26 09:06:12.510149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.098 [2024-07-26 09:06:12.510565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.098 [2024-07-26 09:06:12.510596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.098 [2024-07-26 09:06:12.510614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.098 [2024-07-26 09:06:12.510852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.098 [2024-07-26 09:06:12.511109] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.098 [2024-07-26 09:06:12.511133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.098 [2024-07-26 09:06:12.511149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.098 [2024-07-26 09:06:12.514715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.098 [2024-07-26 09:06:12.524003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.098 [2024-07-26 09:06:12.524445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.098 [2024-07-26 09:06:12.524476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.098 [2024-07-26 09:06:12.524493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.098 [2024-07-26 09:06:12.524730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.098 [2024-07-26 09:06:12.524973] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.098 [2024-07-26 09:06:12.524997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.099 [2024-07-26 09:06:12.525012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.099 [2024-07-26 09:06:12.528593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.099 [2024-07-26 09:06:12.537877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.099 [2024-07-26 09:06:12.538288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.099 [2024-07-26 09:06:12.538319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.099 [2024-07-26 09:06:12.538336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.099 [2024-07-26 09:06:12.538574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.099 [2024-07-26 09:06:12.538817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.099 [2024-07-26 09:06:12.538840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.099 [2024-07-26 09:06:12.538855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.099 [2024-07-26 09:06:12.542449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.099 [2024-07-26 09:06:12.551736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.099 [2024-07-26 09:06:12.552139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.099 [2024-07-26 09:06:12.552171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.099 [2024-07-26 09:06:12.552194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.099 [2024-07-26 09:06:12.552434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.099 [2024-07-26 09:06:12.552676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.099 [2024-07-26 09:06:12.552700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.099 [2024-07-26 09:06:12.552715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.360 [2024-07-26 09:06:12.556303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.360 [2024-07-26 09:06:12.565581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.360 [2024-07-26 09:06:12.566006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.360 [2024-07-26 09:06:12.566037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.360 [2024-07-26 09:06:12.566054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.360 [2024-07-26 09:06:12.566305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.360 [2024-07-26 09:06:12.566549] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.360 [2024-07-26 09:06:12.566573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.360 [2024-07-26 09:06:12.566588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.360 [2024-07-26 09:06:12.570162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.360 [2024-07-26 09:06:12.579434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.360 [2024-07-26 09:06:12.579865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.360 [2024-07-26 09:06:12.579895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.360 [2024-07-26 09:06:12.579912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.360 [2024-07-26 09:06:12.580163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.360 [2024-07-26 09:06:12.580407] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.360 [2024-07-26 09:06:12.580431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.360 [2024-07-26 09:06:12.580446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.360 [2024-07-26 09:06:12.584016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.360 [2024-07-26 09:06:12.593290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.360 [2024-07-26 09:06:12.593685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.360 [2024-07-26 09:06:12.593715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.360 [2024-07-26 09:06:12.593732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.360 [2024-07-26 09:06:12.593971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.360 [2024-07-26 09:06:12.594226] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.360 [2024-07-26 09:06:12.594257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.360 [2024-07-26 09:06:12.594273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.360 [2024-07-26 09:06:12.597841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.360 [2024-07-26 09:06:12.607324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.360 [2024-07-26 09:06:12.607753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.360 [2024-07-26 09:06:12.607783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.360 [2024-07-26 09:06:12.607801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.360 [2024-07-26 09:06:12.608039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.360 [2024-07-26 09:06:12.608292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.360 [2024-07-26 09:06:12.608317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.360 [2024-07-26 09:06:12.608332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.360 [2024-07-26 09:06:12.611899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.360 [2024-07-26 09:06:12.621201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.360 [2024-07-26 09:06:12.621629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.360 [2024-07-26 09:06:12.621660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.360 [2024-07-26 09:06:12.621677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.360 [2024-07-26 09:06:12.621915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.360 [2024-07-26 09:06:12.622170] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.360 [2024-07-26 09:06:12.622194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.360 [2024-07-26 09:06:12.622210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.360 [2024-07-26 09:06:12.625792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.360 [2024-07-26 09:06:12.635150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.360 [2024-07-26 09:06:12.635561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.360 [2024-07-26 09:06:12.635594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.360 [2024-07-26 09:06:12.635612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.360 [2024-07-26 09:06:12.635850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.360 [2024-07-26 09:06:12.636107] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.360 [2024-07-26 09:06:12.636131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.360 [2024-07-26 09:06:12.636147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.360 [2024-07-26 09:06:12.639722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.361 [2024-07-26 09:06:12.649018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.361 [2024-07-26 09:06:12.649466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.361 [2024-07-26 09:06:12.649497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.361 [2024-07-26 09:06:12.649514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.361 [2024-07-26 09:06:12.649753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.361 [2024-07-26 09:06:12.649996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.361 [2024-07-26 09:06:12.650019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.361 [2024-07-26 09:06:12.650035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.361 [2024-07-26 09:06:12.653633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.361 [2024-07-26 09:06:12.662928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.361 [2024-07-26 09:06:12.663377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.361 [2024-07-26 09:06:12.663408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.361 [2024-07-26 09:06:12.663426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.361 [2024-07-26 09:06:12.663664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.361 [2024-07-26 09:06:12.663907] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.361 [2024-07-26 09:06:12.663931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.361 [2024-07-26 09:06:12.663946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.361 [2024-07-26 09:06:12.667529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.361 [2024-07-26 09:06:12.676820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.361 [2024-07-26 09:06:12.677226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.361 [2024-07-26 09:06:12.677258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.361 [2024-07-26 09:06:12.677275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.361 [2024-07-26 09:06:12.677513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.361 [2024-07-26 09:06:12.677756] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.361 [2024-07-26 09:06:12.677780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.361 [2024-07-26 09:06:12.677795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.361 [2024-07-26 09:06:12.681393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.361 [2024-07-26 09:06:12.690683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.361 [2024-07-26 09:06:12.691111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.361 [2024-07-26 09:06:12.691144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.361 [2024-07-26 09:06:12.691162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.361 [2024-07-26 09:06:12.691406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.361 [2024-07-26 09:06:12.691649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.361 [2024-07-26 09:06:12.691673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.361 [2024-07-26 09:06:12.691688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.361 [2024-07-26 09:06:12.695262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.361 [2024-07-26 09:06:12.704540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.361 [2024-07-26 09:06:12.704970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.361 [2024-07-26 09:06:12.705001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.361 [2024-07-26 09:06:12.705018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.361 [2024-07-26 09:06:12.705266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.361 [2024-07-26 09:06:12.705510] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.361 [2024-07-26 09:06:12.705534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.361 [2024-07-26 09:06:12.705549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.361 [2024-07-26 09:06:12.709129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.361 [2024-07-26 09:06:12.718421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.361 [2024-07-26 09:06:12.718845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.361 [2024-07-26 09:06:12.718893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.361 [2024-07-26 09:06:12.718911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.361 [2024-07-26 09:06:12.719163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.361 [2024-07-26 09:06:12.719407] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.361 [2024-07-26 09:06:12.719431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.361 [2024-07-26 09:06:12.719446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.361 [2024-07-26 09:06:12.723037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.361 [2024-07-26 09:06:12.732336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.361 [2024-07-26 09:06:12.732758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.361 [2024-07-26 09:06:12.732789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.361 [2024-07-26 09:06:12.732806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.361 [2024-07-26 09:06:12.733044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.361 [2024-07-26 09:06:12.733297] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.361 [2024-07-26 09:06:12.733321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.361 [2024-07-26 09:06:12.733343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.361 [2024-07-26 09:06:12.736916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.361 [2024-07-26 09:06:12.746204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.361 [2024-07-26 09:06:12.746672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.361 [2024-07-26 09:06:12.746702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.361 [2024-07-26 09:06:12.746719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.361 [2024-07-26 09:06:12.746957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.361 [2024-07-26 09:06:12.747218] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.361 [2024-07-26 09:06:12.747243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.361 [2024-07-26 09:06:12.747258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.361 [2024-07-26 09:06:12.750854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.361 [2024-07-26 09:06:12.760145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.361 [2024-07-26 09:06:12.760579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.361 [2024-07-26 09:06:12.760610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.361 [2024-07-26 09:06:12.760627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.361 [2024-07-26 09:06:12.760865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.361 [2024-07-26 09:06:12.761119] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.361 [2024-07-26 09:06:12.761144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.361 [2024-07-26 09:06:12.761159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.361 [2024-07-26 09:06:12.764731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.361 [2024-07-26 09:06:12.774017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.361 [2024-07-26 09:06:12.774424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.361 [2024-07-26 09:06:12.774454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.361 [2024-07-26 09:06:12.774472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.361 [2024-07-26 09:06:12.774709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.361 [2024-07-26 09:06:12.774951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.361 [2024-07-26 09:06:12.774975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.362 [2024-07-26 09:06:12.774990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.362 [2024-07-26 09:06:12.778569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.362 [2024-07-26 09:06:12.787854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.362 [2024-07-26 09:06:12.788293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.362 [2024-07-26 09:06:12.788330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.362 [2024-07-26 09:06:12.788348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.362 [2024-07-26 09:06:12.788586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.362 [2024-07-26 09:06:12.788829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.362 [2024-07-26 09:06:12.788852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.362 [2024-07-26 09:06:12.788868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.362 [2024-07-26 09:06:12.792459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.362 [2024-07-26 09:06:12.801738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.362 [2024-07-26 09:06:12.802161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.362 [2024-07-26 09:06:12.802192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.362 [2024-07-26 09:06:12.802210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.362 [2024-07-26 09:06:12.802448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.362 [2024-07-26 09:06:12.802690] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.362 [2024-07-26 09:06:12.802714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.362 [2024-07-26 09:06:12.802729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.362 [2024-07-26 09:06:12.806310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.362 [2024-07-26 09:06:12.815636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.362 [2024-07-26 09:06:12.816033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.362 [2024-07-26 09:06:12.816074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.362 [2024-07-26 09:06:12.816095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.362 [2024-07-26 09:06:12.816333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.362 [2024-07-26 09:06:12.816575] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.362 [2024-07-26 09:06:12.816598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.362 [2024-07-26 09:06:12.816614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.623 [2024-07-26 09:06:12.820202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.623 [2024-07-26 09:06:12.829505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.623 [2024-07-26 09:06:12.829933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.623 [2024-07-26 09:06:12.829963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.623 [2024-07-26 09:06:12.829981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.624 [2024-07-26 09:06:12.830230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.624 [2024-07-26 09:06:12.830480] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.624 [2024-07-26 09:06:12.830504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.624 [2024-07-26 09:06:12.830519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.624 [2024-07-26 09:06:12.834098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.624 [2024-07-26 09:06:12.843374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.624 [2024-07-26 09:06:12.843799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.624 [2024-07-26 09:06:12.843846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.624 [2024-07-26 09:06:12.843863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.624 [2024-07-26 09:06:12.844113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.624 [2024-07-26 09:06:12.844356] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.624 [2024-07-26 09:06:12.844380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.624 [2024-07-26 09:06:12.844395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.624 [2024-07-26 09:06:12.847968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.624 [2024-07-26 09:06:12.857234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.624 [2024-07-26 09:06:12.857633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.624 [2024-07-26 09:06:12.857664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.624 [2024-07-26 09:06:12.857681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.624 [2024-07-26 09:06:12.857919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.624 [2024-07-26 09:06:12.858175] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.624 [2024-07-26 09:06:12.858200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.624 [2024-07-26 09:06:12.858216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.624 [2024-07-26 09:06:12.861781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.624 [2024-07-26 09:06:12.871268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.624 [2024-07-26 09:06:12.871690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.624 [2024-07-26 09:06:12.871721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.624 [2024-07-26 09:06:12.871738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.624 [2024-07-26 09:06:12.871976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.624 [2024-07-26 09:06:12.872231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.624 [2024-07-26 09:06:12.872255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.624 [2024-07-26 09:06:12.872271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.624 [2024-07-26 09:06:12.875848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.624 [2024-07-26 09:06:12.885147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.624 [2024-07-26 09:06:12.885587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.624 [2024-07-26 09:06:12.885618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.624 [2024-07-26 09:06:12.885635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.624 [2024-07-26 09:06:12.885873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.624 [2024-07-26 09:06:12.886131] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.624 [2024-07-26 09:06:12.886156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.624 [2024-07-26 09:06:12.886171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.624 [2024-07-26 09:06:12.889747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.624 [2024-07-26 09:06:12.899023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.624 [2024-07-26 09:06:12.899456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.624 [2024-07-26 09:06:12.899486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.624 [2024-07-26 09:06:12.899503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.624 [2024-07-26 09:06:12.899741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.624 [2024-07-26 09:06:12.899984] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.624 [2024-07-26 09:06:12.900008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.624 [2024-07-26 09:06:12.900023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.624 [2024-07-26 09:06:12.903605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.624 [2024-07-26 09:06:12.912879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.624 [2024-07-26 09:06:12.913317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.624 [2024-07-26 09:06:12.913347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.624 [2024-07-26 09:06:12.913365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.624 [2024-07-26 09:06:12.913603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.624 [2024-07-26 09:06:12.913845] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.624 [2024-07-26 09:06:12.913869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.624 [2024-07-26 09:06:12.913884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.624 [2024-07-26 09:06:12.917468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.624 [2024-07-26 09:06:12.926759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.624 [2024-07-26 09:06:12.927180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.624 [2024-07-26 09:06:12.927211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.624 [2024-07-26 09:06:12.927235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.624 [2024-07-26 09:06:12.927474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.624 [2024-07-26 09:06:12.927717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.624 [2024-07-26 09:06:12.927741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.624 [2024-07-26 09:06:12.927756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.624 [2024-07-26 09:06:12.931338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.624 [2024-07-26 09:06:12.940608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.624 [2024-07-26 09:06:12.941077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.624 [2024-07-26 09:06:12.941124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.624 [2024-07-26 09:06:12.941142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.624 [2024-07-26 09:06:12.941380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.624 [2024-07-26 09:06:12.941623] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.624 [2024-07-26 09:06:12.941647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.624 [2024-07-26 09:06:12.941662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.624 [2024-07-26 09:06:12.945245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.624 [2024-07-26 09:06:12.954520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.624 [2024-07-26 09:06:12.954953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.624 [2024-07-26 09:06:12.954984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.624 [2024-07-26 09:06:12.955001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.624 [2024-07-26 09:06:12.955251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.624 [2024-07-26 09:06:12.955495] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.624 [2024-07-26 09:06:12.955519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.624 [2024-07-26 09:06:12.955534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.624 [2024-07-26 09:06:12.959111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.625 [2024-07-26 09:06:12.968381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.625 [2024-07-26 09:06:12.968805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.625 [2024-07-26 09:06:12.968835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.625 [2024-07-26 09:06:12.968853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.625 [2024-07-26 09:06:12.969102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.625 [2024-07-26 09:06:12.969346] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.625 [2024-07-26 09:06:12.969375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.625 [2024-07-26 09:06:12.969391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.625 [2024-07-26 09:06:12.972955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.625 [2024-07-26 09:06:12.982223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.625 [2024-07-26 09:06:12.982625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.625 [2024-07-26 09:06:12.982655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.625 [2024-07-26 09:06:12.982673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.625 [2024-07-26 09:06:12.982911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.625 [2024-07-26 09:06:12.983167] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.625 [2024-07-26 09:06:12.983191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.625 [2024-07-26 09:06:12.983207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.625 [2024-07-26 09:06:12.986776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.625 [2024-07-26 09:06:12.996261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.625 [2024-07-26 09:06:12.996688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.625 [2024-07-26 09:06:12.996718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.625 [2024-07-26 09:06:12.996736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.625 [2024-07-26 09:06:12.996974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.625 [2024-07-26 09:06:12.997229] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.625 [2024-07-26 09:06:12.997253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.625 [2024-07-26 09:06:12.997269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.625 [2024-07-26 09:06:13.000839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.625 [2024-07-26 09:06:13.010122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.625 [2024-07-26 09:06:13.010520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.625 [2024-07-26 09:06:13.010551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.625 [2024-07-26 09:06:13.010568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.625 [2024-07-26 09:06:13.010805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.625 [2024-07-26 09:06:13.011048] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.625 [2024-07-26 09:06:13.011082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.625 [2024-07-26 09:06:13.011098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.625 [2024-07-26 09:06:13.014664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.625 [2024-07-26 09:06:13.024175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.625 [2024-07-26 09:06:13.024588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.625 [2024-07-26 09:06:13.024618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.625 [2024-07-26 09:06:13.024636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.625 [2024-07-26 09:06:13.024873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.625 [2024-07-26 09:06:13.025126] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.625 [2024-07-26 09:06:13.025150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.625 [2024-07-26 09:06:13.025166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.625 [2024-07-26 09:06:13.028726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.625 [2024-07-26 09:06:13.038200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.625 [2024-07-26 09:06:13.038632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.625 [2024-07-26 09:06:13.038663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.625 [2024-07-26 09:06:13.038680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.625 [2024-07-26 09:06:13.038917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.625 [2024-07-26 09:06:13.039169] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.625 [2024-07-26 09:06:13.039194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.625 [2024-07-26 09:06:13.039209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.625 [2024-07-26 09:06:13.042778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.625 [2024-07-26 09:06:13.052048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.625 [2024-07-26 09:06:13.052483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.625 [2024-07-26 09:06:13.052513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.625 [2024-07-26 09:06:13.052531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.625 [2024-07-26 09:06:13.052769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.625 [2024-07-26 09:06:13.053012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.625 [2024-07-26 09:06:13.053036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.625 [2024-07-26 09:06:13.053051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.625 [2024-07-26 09:06:13.056629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.625 [2024-07-26 09:06:13.065917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.625 [2024-07-26 09:06:13.066316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.625 [2024-07-26 09:06:13.066347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.625 [2024-07-26 09:06:13.066370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.625 [2024-07-26 09:06:13.066609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.625 [2024-07-26 09:06:13.066852] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.625 [2024-07-26 09:06:13.066876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.625 [2024-07-26 09:06:13.066891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.625 [2024-07-26 09:06:13.070495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.625 [2024-07-26 09:06:13.079781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.625 [2024-07-26 09:06:13.080219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.625 [2024-07-26 09:06:13.080250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.625 [2024-07-26 09:06:13.080268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.625 [2024-07-26 09:06:13.080505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.625 [2024-07-26 09:06:13.080748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.625 [2024-07-26 09:06:13.080771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.625 [2024-07-26 09:06:13.080787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.887 [2024-07-26 09:06:13.084370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.887 [2024-07-26 09:06:13.093654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.887 [2024-07-26 09:06:13.094051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.887 [2024-07-26 09:06:13.094090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.887 [2024-07-26 09:06:13.094108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.887 [2024-07-26 09:06:13.094346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.887 [2024-07-26 09:06:13.094589] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.887 [2024-07-26 09:06:13.094612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.887 [2024-07-26 09:06:13.094627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.887 [2024-07-26 09:06:13.098212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.887 [2024-07-26 09:06:13.107692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.887 [2024-07-26 09:06:13.108118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.887 [2024-07-26 09:06:13.108149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.887 [2024-07-26 09:06:13.108166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.887 [2024-07-26 09:06:13.108404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.887 [2024-07-26 09:06:13.108647] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.887 [2024-07-26 09:06:13.108676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.887 [2024-07-26 09:06:13.108692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.887 [2024-07-26 09:06:13.112277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.887 [2024-07-26 09:06:13.121555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.888 [2024-07-26 09:06:13.121991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.888 [2024-07-26 09:06:13.122021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.888 [2024-07-26 09:06:13.122038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.888 [2024-07-26 09:06:13.122287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.888 [2024-07-26 09:06:13.122531] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.888 [2024-07-26 09:06:13.122555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.888 [2024-07-26 09:06:13.122570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.888 [2024-07-26 09:06:13.126160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.888 [2024-07-26 09:06:13.135493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.888 [2024-07-26 09:06:13.135930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.888 [2024-07-26 09:06:13.135961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.888 [2024-07-26 09:06:13.135978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.888 [2024-07-26 09:06:13.136229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.888 [2024-07-26 09:06:13.136473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.888 [2024-07-26 09:06:13.136498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.888 [2024-07-26 09:06:13.136513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.888 [2024-07-26 09:06:13.140096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.888 [2024-07-26 09:06:13.149384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.888 [2024-07-26 09:06:13.149786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.888 [2024-07-26 09:06:13.149817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.888 [2024-07-26 09:06:13.149835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.888 [2024-07-26 09:06:13.150084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.888 [2024-07-26 09:06:13.150328] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.888 [2024-07-26 09:06:13.150352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.888 [2024-07-26 09:06:13.150368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.888 [2024-07-26 09:06:13.153934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.888 [2024-07-26 09:06:13.163412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.888 [2024-07-26 09:06:13.163852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.888 [2024-07-26 09:06:13.163883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.888 [2024-07-26 09:06:13.163900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.888 [2024-07-26 09:06:13.164149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.888 [2024-07-26 09:06:13.164393] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.888 [2024-07-26 09:06:13.164417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.888 [2024-07-26 09:06:13.164433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.888 [2024-07-26 09:06:13.168001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.888 [2024-07-26 09:06:13.177278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.888 [2024-07-26 09:06:13.177702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.888 [2024-07-26 09:06:13.177733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.888 [2024-07-26 09:06:13.177750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.888 [2024-07-26 09:06:13.177988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.888 [2024-07-26 09:06:13.178242] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.888 [2024-07-26 09:06:13.178266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.888 [2024-07-26 09:06:13.178281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.888 [2024-07-26 09:06:13.181848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.888 [2024-07-26 09:06:13.191123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.888 [2024-07-26 09:06:13.191531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.888 [2024-07-26 09:06:13.191562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.888 [2024-07-26 09:06:13.191579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.888 [2024-07-26 09:06:13.191817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.888 [2024-07-26 09:06:13.192070] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.888 [2024-07-26 09:06:13.192094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.888 [2024-07-26 09:06:13.192109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.888 [2024-07-26 09:06:13.195679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.888 [2024-07-26 09:06:13.205162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.888 [2024-07-26 09:06:13.205570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.888 [2024-07-26 09:06:13.205600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.888 [2024-07-26 09:06:13.205617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.888 [2024-07-26 09:06:13.205860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.888 [2024-07-26 09:06:13.206116] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.888 [2024-07-26 09:06:13.206140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.888 [2024-07-26 09:06:13.206156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.888 [2024-07-26 09:06:13.209740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.888 [2024-07-26 09:06:13.219020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.888 [2024-07-26 09:06:13.219466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.888 [2024-07-26 09:06:13.219497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.888 [2024-07-26 09:06:13.219514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.888 [2024-07-26 09:06:13.219752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.888 [2024-07-26 09:06:13.219995] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.888 [2024-07-26 09:06:13.220019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.888 [2024-07-26 09:06:13.220034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.888 [2024-07-26 09:06:13.223611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.888 [2024-07-26 09:06:13.232903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.888 [2024-07-26 09:06:13.233332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.888 [2024-07-26 09:06:13.233362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.888 [2024-07-26 09:06:13.233380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.888 [2024-07-26 09:06:13.233619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.888 [2024-07-26 09:06:13.233861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.888 [2024-07-26 09:06:13.233885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.888 [2024-07-26 09:06:13.233901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.888 [2024-07-26 09:06:13.237478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.888 [2024-07-26 09:06:13.246744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.888 [2024-07-26 09:06:13.247173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.888 [2024-07-26 09:06:13.247204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.888 [2024-07-26 09:06:13.247223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.888 [2024-07-26 09:06:13.247461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.888 [2024-07-26 09:06:13.247705] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.888 [2024-07-26 09:06:13.247729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.888 [2024-07-26 09:06:13.247750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.888 [2024-07-26 09:06:13.251331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.889 [2024-07-26 09:06:13.260762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.889 [2024-07-26 09:06:13.261173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.889 [2024-07-26 09:06:13.261204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.889 [2024-07-26 09:06:13.261221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.889 [2024-07-26 09:06:13.261460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.889 [2024-07-26 09:06:13.261703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.889 [2024-07-26 09:06:13.261726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.889 [2024-07-26 09:06:13.261742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.889 [2024-07-26 09:06:13.265321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.889 [2024-07-26 09:06:13.274600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.889 [2024-07-26 09:06:13.275026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.889 [2024-07-26 09:06:13.275056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.889 [2024-07-26 09:06:13.275087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.889 [2024-07-26 09:06:13.275326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.889 [2024-07-26 09:06:13.275569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.889 [2024-07-26 09:06:13.275592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.889 [2024-07-26 09:06:13.275608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.889 [2024-07-26 09:06:13.279184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.889 [2024-07-26 09:06:13.288465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.889 [2024-07-26 09:06:13.288903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.889 [2024-07-26 09:06:13.288934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.889 [2024-07-26 09:06:13.288952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.889 [2024-07-26 09:06:13.289208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.889 [2024-07-26 09:06:13.289452] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.889 [2024-07-26 09:06:13.289475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.889 [2024-07-26 09:06:13.289491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.889 [2024-07-26 09:06:13.293066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.889 [2024-07-26 09:06:13.302336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.889 [2024-07-26 09:06:13.302777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.889 [2024-07-26 09:06:13.302813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.889 [2024-07-26 09:06:13.302831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.889 [2024-07-26 09:06:13.303081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.889 [2024-07-26 09:06:13.303325] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.889 [2024-07-26 09:06:13.303349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.889 [2024-07-26 09:06:13.303364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.889 [2024-07-26 09:06:13.306936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.889 [2024-07-26 09:06:13.316208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:54.889 [2024-07-26 09:06:13.316634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.889 [2024-07-26 09:06:13.316664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420
00:32:54.889 [2024-07-26 09:06:13.316682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set
00:32:54.889 [2024-07-26 09:06:13.316919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor
00:32:54.889 [2024-07-26 09:06:13.317173] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:54.889 [2024-07-26 09:06:13.317198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:54.889 [2024-07-26 09:06:13.317214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:54.889 [2024-07-26 09:06:13.320778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:54.889 [2024-07-26 09:06:13.330073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.889 [2024-07-26 09:06:13.330501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.889 [2024-07-26 09:06:13.330531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.889 [2024-07-26 09:06:13.330548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.889 [2024-07-26 09:06:13.330787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.889 [2024-07-26 09:06:13.331029] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.889 [2024-07-26 09:06:13.331053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.889 [2024-07-26 09:06:13.331080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.889 [2024-07-26 09:06:13.334645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.889 [2024-07-26 09:06:13.343912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.889 [2024-07-26 09:06:13.344330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.889 [2024-07-26 09:06:13.344361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:54.889 [2024-07-26 09:06:13.344378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:54.889 [2024-07-26 09:06:13.344616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:54.889 [2024-07-26 09:06:13.344869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.889 [2024-07-26 09:06:13.344893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.889 [2024-07-26 09:06:13.344908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-07-26 09:06:13.348492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.149 [2024-07-26 09:06:13.357759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.149 [2024-07-26 09:06:13.358172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-07-26 09:06:13.358203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.149 [2024-07-26 09:06:13.358220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.149 [2024-07-26 09:06:13.358458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.149 [2024-07-26 09:06:13.358701] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.149 [2024-07-26 09:06:13.358724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.149 [2024-07-26 09:06:13.358739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-07-26 09:06:13.362315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.149 [2024-07-26 09:06:13.371790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.149 [2024-07-26 09:06:13.372232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-07-26 09:06:13.372263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.149 [2024-07-26 09:06:13.372281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.149 [2024-07-26 09:06:13.372518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.149 [2024-07-26 09:06:13.372761] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.149 [2024-07-26 09:06:13.372785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.149 [2024-07-26 09:06:13.372800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-07-26 09:06:13.376377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.149 [2024-07-26 09:06:13.385637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.149 [2024-07-26 09:06:13.386035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-07-26 09:06:13.386072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.149 [2024-07-26 09:06:13.386091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.149 [2024-07-26 09:06:13.386329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.149 [2024-07-26 09:06:13.386571] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.149 [2024-07-26 09:06:13.386594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.149 [2024-07-26 09:06:13.386610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-07-26 09:06:13.390189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.149 [2024-07-26 09:06:13.399662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.149 [2024-07-26 09:06:13.400092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-07-26 09:06:13.400123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.149 [2024-07-26 09:06:13.400141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.149 [2024-07-26 09:06:13.400379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.149 [2024-07-26 09:06:13.400621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.149 [2024-07-26 09:06:13.400645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.149 [2024-07-26 09:06:13.400660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-07-26 09:06:13.404239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.149 [2024-07-26 09:06:13.413501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.149 [2024-07-26 09:06:13.413907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-07-26 09:06:13.413937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.149 [2024-07-26 09:06:13.413954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.149 [2024-07-26 09:06:13.414205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.149 [2024-07-26 09:06:13.414449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.149 [2024-07-26 09:06:13.414473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.149 [2024-07-26 09:06:13.414488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-07-26 09:06:13.418054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.427549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.427954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.427985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.428002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.428252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.428496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.428520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.428535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.432110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.441583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.442005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.442035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.442067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.442309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.442552] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.442577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.442592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.446164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.455485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.455882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.455913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.455930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.456181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.456425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.456449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.456464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.460030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.469513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.469918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.469949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.469967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.470218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.470462] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.470486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.470501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.474072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.483541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.483941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.483972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.483990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.484239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.484484] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.484513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.484529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.488103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.497574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.498001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.498031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.498048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.498297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.498541] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.498565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.498580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.502154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.511412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.511844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.511874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.511891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.512142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.512386] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.512409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.512425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.515989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.525259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.525684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.525714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.525732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.525979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.526236] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.526260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.526275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.529842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.539120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.539547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.539578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.539596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.539833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.540087] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.540112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.540127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.543692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.552957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.553382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.553414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.553432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.553670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.553914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.553938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.553953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.557530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.567003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.567437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.567467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.567485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.567721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.567964] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.567988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.568003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.571579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.580847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.581287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.581318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.581336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.150 [2024-07-26 09:06:13.581580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.150 [2024-07-26 09:06:13.581823] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.150 [2024-07-26 09:06:13.581847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.150 [2024-07-26 09:06:13.581862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.150 [2024-07-26 09:06:13.585436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.150 [2024-07-26 09:06:13.594701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.150 [2024-07-26 09:06:13.595107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.150 [2024-07-26 09:06:13.595138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.150 [2024-07-26 09:06:13.595155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.151 [2024-07-26 09:06:13.595393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.151 [2024-07-26 09:06:13.595635] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.151 [2024-07-26 09:06:13.595659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.151 [2024-07-26 09:06:13.595674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.151 [2024-07-26 09:06:13.599251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.411 [2024-07-26 09:06:13.608728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.411 [2024-07-26 09:06:13.609152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.411 [2024-07-26 09:06:13.609183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.411 [2024-07-26 09:06:13.609200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.411 [2024-07-26 09:06:13.609438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.411 [2024-07-26 09:06:13.609681] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.411 [2024-07-26 09:06:13.609705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.411 [2024-07-26 09:06:13.609720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.411 [2024-07-26 09:06:13.613301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.411 [2024-07-26 09:06:13.622560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.411 [2024-07-26 09:06:13.622976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.411 [2024-07-26 09:06:13.623006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.411 [2024-07-26 09:06:13.623023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.411 [2024-07-26 09:06:13.623271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.411 [2024-07-26 09:06:13.623515] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.411 [2024-07-26 09:06:13.623539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.411 [2024-07-26 09:06:13.623561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.411 [2024-07-26 09:06:13.627148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.411 [2024-07-26 09:06:13.636405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.411 [2024-07-26 09:06:13.636830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.411 [2024-07-26 09:06:13.636860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.411 [2024-07-26 09:06:13.636877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.411 [2024-07-26 09:06:13.637125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.411 [2024-07-26 09:06:13.637369] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.411 [2024-07-26 09:06:13.637393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.411 [2024-07-26 09:06:13.637409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.411 [2024-07-26 09:06:13.640973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.411 [2024-07-26 09:06:13.650247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.411 [2024-07-26 09:06:13.650671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.411 [2024-07-26 09:06:13.650702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.411 [2024-07-26 09:06:13.650719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.411 [2024-07-26 09:06:13.650956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.411 [2024-07-26 09:06:13.651210] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.411 [2024-07-26 09:06:13.651244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.411 [2024-07-26 09:06:13.651260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.411 [2024-07-26 09:06:13.654833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.411 [2024-07-26 09:06:13.664101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.411 [2024-07-26 09:06:13.664527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.411 [2024-07-26 09:06:13.664557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.411 [2024-07-26 09:06:13.664575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.411 [2024-07-26 09:06:13.664813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.411 [2024-07-26 09:06:13.665056] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.411 [2024-07-26 09:06:13.665090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.411 [2024-07-26 09:06:13.665106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.411 [2024-07-26 09:06:13.668671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.411 [2024-07-26 09:06:13.677933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.411 [2024-07-26 09:06:13.678380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.411 [2024-07-26 09:06:13.678411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.411 [2024-07-26 09:06:13.678429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.411 [2024-07-26 09:06:13.678666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.411 [2024-07-26 09:06:13.678909] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.411 [2024-07-26 09:06:13.678934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.411 [2024-07-26 09:06:13.678949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.411 [2024-07-26 09:06:13.682524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.411 [2024-07-26 09:06:13.691792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.411 [2024-07-26 09:06:13.692207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.411 [2024-07-26 09:06:13.692237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.412 [2024-07-26 09:06:13.692254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.412 [2024-07-26 09:06:13.692492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.412 [2024-07-26 09:06:13.692735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.412 [2024-07-26 09:06:13.692758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.412 [2024-07-26 09:06:13.692774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.412 [2024-07-26 09:06:13.696351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.412 [2024-07-26 09:06:13.705823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.412 [2024-07-26 09:06:13.706257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-07-26 09:06:13.706288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.412 [2024-07-26 09:06:13.706306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.412 [2024-07-26 09:06:13.706544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.412 [2024-07-26 09:06:13.706786] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.412 [2024-07-26 09:06:13.706810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.412 [2024-07-26 09:06:13.706825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.412 [2024-07-26 09:06:13.710402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.412 [2024-07-26 09:06:13.719664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.412 [2024-07-26 09:06:13.720096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-07-26 09:06:13.720127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.412 [2024-07-26 09:06:13.720144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.412 [2024-07-26 09:06:13.720388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.412 [2024-07-26 09:06:13.720631] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.412 [2024-07-26 09:06:13.720654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.412 [2024-07-26 09:06:13.720669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.412 [2024-07-26 09:06:13.724248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.412 [2024-07-26 09:06:13.733524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.412 [2024-07-26 09:06:13.733930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-07-26 09:06:13.733961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.412 [2024-07-26 09:06:13.733979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.412 [2024-07-26 09:06:13.734227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.412 [2024-07-26 09:06:13.734472] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.412 [2024-07-26 09:06:13.734495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.412 [2024-07-26 09:06:13.734510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.412 [2024-07-26 09:06:13.738081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.412 [2024-07-26 09:06:13.747559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.412 [2024-07-26 09:06:13.747985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-07-26 09:06:13.748016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.412 [2024-07-26 09:06:13.748033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.412 [2024-07-26 09:06:13.748285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.412 [2024-07-26 09:06:13.748529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.412 [2024-07-26 09:06:13.748553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.412 [2024-07-26 09:06:13.748568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.412 [2024-07-26 09:06:13.752140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.412 [2024-07-26 09:06:13.761403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.412 [2024-07-26 09:06:13.761849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-07-26 09:06:13.761880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.412 [2024-07-26 09:06:13.761897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.412 [2024-07-26 09:06:13.762146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.412 [2024-07-26 09:06:13.762390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.412 [2024-07-26 09:06:13.762414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.412 [2024-07-26 09:06:13.762434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.412 [2024-07-26 09:06:13.766001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.412 [2024-07-26 09:06:13.775291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.412 [2024-07-26 09:06:13.775740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-07-26 09:06:13.775771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.412 [2024-07-26 09:06:13.775788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.412 [2024-07-26 09:06:13.776026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.412 [2024-07-26 09:06:13.776278] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.412 [2024-07-26 09:06:13.776303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.412 [2024-07-26 09:06:13.776318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.412 [2024-07-26 09:06:13.779881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.412 [2024-07-26 09:06:13.789165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.412 [2024-07-26 09:06:13.789590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-07-26 09:06:13.789621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.412 [2024-07-26 09:06:13.789638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.412 [2024-07-26 09:06:13.789876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.412 [2024-07-26 09:06:13.790130] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.412 [2024-07-26 09:06:13.790156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.412 [2024-07-26 09:06:13.790171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.412 [2024-07-26 09:06:13.793740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1115335 Killed "${NVMF_APP[@]}" "$@" 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.412 [2024-07-26 09:06:13.803015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.412 [2024-07-26 09:06:13.803422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-07-26 09:06:13.803453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.412 [2024-07-26 09:06:13.803471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.412 [2024-07-26 09:06:13.803708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.412 [2024-07-26 09:06:13.803951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.412 [2024-07-26 09:06:13.803981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.412 [2024-07-26 09:06:13.803997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1116807 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1116807 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1116807 ']' 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:55.412 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.413 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:55.413 09:06:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.413 [2024-07-26 09:06:13.807596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.413 [2024-07-26 09:06:13.816870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.413 [2024-07-26 09:06:13.817295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-07-26 09:06:13.817325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.413 [2024-07-26 09:06:13.817342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.413 [2024-07-26 09:06:13.817579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.413 [2024-07-26 09:06:13.817822] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.413 [2024-07-26 09:06:13.817846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.413 [2024-07-26 09:06:13.817862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.413 [2024-07-26 09:06:13.821438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.413 [2024-07-26 09:06:13.830506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.413 [2024-07-26 09:06:13.830927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-07-26 09:06:13.830954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.413 [2024-07-26 09:06:13.830970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.413 [2024-07-26 09:06:13.831193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.413 [2024-07-26 09:06:13.831425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.413 [2024-07-26 09:06:13.831459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.413 [2024-07-26 09:06:13.831472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.413 [2024-07-26 09:06:13.834641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.413 [2024-07-26 09:06:13.843833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.413 [2024-07-26 09:06:13.844233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-07-26 09:06:13.844262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.413 [2024-07-26 09:06:13.844278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.413 [2024-07-26 09:06:13.844520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.413 [2024-07-26 09:06:13.844733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.413 [2024-07-26 09:06:13.844752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.413 [2024-07-26 09:06:13.844764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.413 [2024-07-26 09:06:13.847996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.413 [2024-07-26 09:06:13.855746] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:32:55.413 [2024-07-26 09:06:13.855827] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.413 [2024-07-26 09:06:13.857217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.413 [2024-07-26 09:06:13.857652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-07-26 09:06:13.857681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.413 [2024-07-26 09:06:13.857697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.413 [2024-07-26 09:06:13.857951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.413 [2024-07-26 09:06:13.858181] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.413 [2024-07-26 09:06:13.858203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.413 [2024-07-26 09:06:13.858217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.413 [2024-07-26 09:06:13.861262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.673 [2024-07-26 09:06:13.870951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.673 [2024-07-26 09:06:13.871419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.673 [2024-07-26 09:06:13.871447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.673 [2024-07-26 09:06:13.871462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.673 [2024-07-26 09:06:13.871696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.673 [2024-07-26 09:06:13.871897] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.673 [2024-07-26 09:06:13.871917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.673 [2024-07-26 09:06:13.871930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.673 [2024-07-26 09:06:13.875052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.673 [2024-07-26 09:06:13.884432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.673 [2024-07-26 09:06:13.884877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.673 [2024-07-26 09:06:13.884914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.673 [2024-07-26 09:06:13.884932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.673 [2024-07-26 09:06:13.885185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.673 [2024-07-26 09:06:13.885406] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.673 [2024-07-26 09:06:13.885426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.673 [2024-07-26 09:06:13.885439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.673 [2024-07-26 09:06:13.888476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.673 [2024-07-26 09:06:13.897911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.673 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.673 [2024-07-26 09:06:13.898333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.673 [2024-07-26 09:06:13.898376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.673 [2024-07-26 09:06:13.898392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.673 [2024-07-26 09:06:13.898625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.673 [2024-07-26 09:06:13.898825] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.673 [2024-07-26 09:06:13.898844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.673 [2024-07-26 09:06:13.898857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.673 [2024-07-26 09:06:13.902112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.673 [2024-07-26 09:06:13.902370] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:55.673 [2024-07-26 09:06:13.911496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.673 [2024-07-26 09:06:13.911901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.673 [2024-07-26 09:06:13.911929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.673 [2024-07-26 09:06:13.911945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.673 [2024-07-26 09:06:13.912167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.673 [2024-07-26 09:06:13.912423] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.673 [2024-07-26 09:06:13.912444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.673 [2024-07-26 09:06:13.912457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.673 [2024-07-26 09:06:13.915622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.673 [2024-07-26 09:06:13.925073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.673 [2024-07-26 09:06:13.925454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.673 [2024-07-26 09:06:13.925482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.673 [2024-07-26 09:06:13.925498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.673 [2024-07-26 09:06:13.925754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.673 [2024-07-26 09:06:13.925959] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.673 [2024-07-26 09:06:13.925980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.673 [2024-07-26 09:06:13.925993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.673 [2024-07-26 09:06:13.929231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.673 [2024-07-26 09:06:13.931780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:55.673 [2024-07-26 09:06:13.938575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.673 [2024-07-26 09:06:13.939112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.673 [2024-07-26 09:06:13.939150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.673 [2024-07-26 09:06:13.939169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.673 [2024-07-26 09:06:13.939414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.673 [2024-07-26 09:06:13.939626] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.673 [2024-07-26 09:06:13.939647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.673 [2024-07-26 09:06:13.939664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.673 [2024-07-26 09:06:13.942816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.673 [2024-07-26 09:06:13.952130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.673 [2024-07-26 09:06:13.952673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.673 [2024-07-26 09:06:13.952714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.673 [2024-07-26 09:06:13.952734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.673 [2024-07-26 09:06:13.952966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.673 [2024-07-26 09:06:13.953205] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.673 [2024-07-26 09:06:13.953228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.673 [2024-07-26 09:06:13.953247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.673 [2024-07-26 09:06:13.956400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.673 [2024-07-26 09:06:13.965648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.673 [2024-07-26 09:06:13.966069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.673 [2024-07-26 09:06:13.966099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.673 [2024-07-26 09:06:13.966115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.673 [2024-07-26 09:06:13.966347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.673 [2024-07-26 09:06:13.966571] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.673 [2024-07-26 09:06:13.966602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.673 [2024-07-26 09:06:13.966617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.673 [2024-07-26 09:06:13.969752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.673 [2024-07-26 09:06:13.979198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.673 [2024-07-26 09:06:13.979740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.673 [2024-07-26 09:06:13.979773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.673 [2024-07-26 09:06:13.979791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.673 [2024-07-26 09:06:13.980038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.673 [2024-07-26 09:06:13.980273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.673 [2024-07-26 09:06:13.980296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.673 [2024-07-26 09:06:13.980312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.673 [2024-07-26 09:06:13.983464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.673 [2024-07-26 09:06:13.992748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.673 [2024-07-26 09:06:13.993363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.674 [2024-07-26 09:06:13.993406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.674 [2024-07-26 09:06:13.993426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.674 [2024-07-26 09:06:13.993677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.674 [2024-07-26 09:06:13.993889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.674 [2024-07-26 09:06:13.993910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.674 [2024-07-26 09:06:13.993928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.674 [2024-07-26 09:06:13.997134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.674 [2024-07-26 09:06:14.006262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.674 [2024-07-26 09:06:14.006730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.674 [2024-07-26 09:06:14.006759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.674 [2024-07-26 09:06:14.006775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.674 [2024-07-26 09:06:14.007020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.674 [2024-07-26 09:06:14.007265] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.674 [2024-07-26 09:06:14.007287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.674 [2024-07-26 09:06:14.007302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.674 [2024-07-26 09:06:14.010452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.674 [2024-07-26 09:06:14.019717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.674 [2024-07-26 09:06:14.020133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.674 [2024-07-26 09:06:14.020163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.674 [2024-07-26 09:06:14.020180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.674 [2024-07-26 09:06:14.020410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.674 [2024-07-26 09:06:14.020617] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.674 [2024-07-26 09:06:14.020637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.674 [2024-07-26 09:06:14.020651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.674 [2024-07-26 09:06:14.023780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.674 [2024-07-26 09:06:14.024106] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.674 [2024-07-26 09:06:14.024157] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.674 [2024-07-26 09:06:14.024171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.674 [2024-07-26 09:06:14.024183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:55.674 [2024-07-26 09:06:14.024194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.674 [2024-07-26 09:06:14.024248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:55.674 [2024-07-26 09:06:14.024307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:55.674 [2024-07-26 09:06:14.024309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.674 [2024-07-26 09:06:14.033324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.674 [2024-07-26 09:06:14.033881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.674 [2024-07-26 09:06:14.033924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.674 [2024-07-26 09:06:14.033944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.674 [2024-07-26 09:06:14.034180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.674 [2024-07-26 09:06:14.034407] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.674 [2024-07-26 09:06:14.034430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.674 [2024-07-26 09:06:14.034448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.674 [2024-07-26 09:06:14.037664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.674 [2024-07-26 09:06:14.046974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.674 [2024-07-26 09:06:14.047584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.674 [2024-07-26 09:06:14.047626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.674 [2024-07-26 09:06:14.047648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.674 [2024-07-26 09:06:14.047874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.674 [2024-07-26 09:06:14.048121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.674 [2024-07-26 09:06:14.048156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.674 [2024-07-26 09:06:14.048175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.674 [2024-07-26 09:06:14.051389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.674 [2024-07-26 09:06:14.060517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.674 [2024-07-26 09:06:14.061087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.674 [2024-07-26 09:06:14.061134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.674 [2024-07-26 09:06:14.061155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.674 [2024-07-26 09:06:14.061399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.674 [2024-07-26 09:06:14.061617] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.674 [2024-07-26 09:06:14.061639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.674 [2024-07-26 09:06:14.061658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.674 [2024-07-26 09:06:14.064773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.674 [2024-07-26 09:06:14.073969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.674 [2024-07-26 09:06:14.074582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.674 [2024-07-26 09:06:14.074626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.674 [2024-07-26 09:06:14.074647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.674 [2024-07-26 09:06:14.074873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.674 [2024-07-26 09:06:14.075117] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.674 [2024-07-26 09:06:14.075140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.674 [2024-07-26 09:06:14.075158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.674 [2024-07-26 09:06:14.078525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.674 [2024-07-26 09:06:14.087683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.674 [2024-07-26 09:06:14.088144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.674 [2024-07-26 09:06:14.088180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.674 [2024-07-26 09:06:14.088200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.674 [2024-07-26 09:06:14.088437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.674 [2024-07-26 09:06:14.088654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.674 [2024-07-26 09:06:14.088675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.674 [2024-07-26 09:06:14.088691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.674 [2024-07-26 09:06:14.091932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.674 [2024-07-26 09:06:14.101161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.674 [2024-07-26 09:06:14.101680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.674 [2024-07-26 09:06:14.101721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.674 [2024-07-26 09:06:14.101742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.674 [2024-07-26 09:06:14.101977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.674 [2024-07-26 09:06:14.102203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.674 [2024-07-26 09:06:14.102226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.674 [2024-07-26 09:06:14.102243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.674 [2024-07-26 09:06:14.105405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.674 [2024-07-26 09:06:14.114745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.674 [2024-07-26 09:06:14.115137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.674 [2024-07-26 09:06:14.115166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.674 [2024-07-26 09:06:14.115183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.674 [2024-07-26 09:06:14.115415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.675 [2024-07-26 09:06:14.115628] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.675 [2024-07-26 09:06:14.115648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.675 [2024-07-26 09:06:14.115663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.675 [2024-07-26 09:06:14.118825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.675 [2024-07-26 09:06:14.128358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.675 [2024-07-26 09:06:14.128740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.675 [2024-07-26 09:06:14.128767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.675 [2024-07-26 09:06:14.128783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.675 [2024-07-26 09:06:14.128997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.675 [2024-07-26 09:06:14.129227] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.675 [2024-07-26 09:06:14.129250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.675 [2024-07-26 09:06:14.129265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.934 [2024-07-26 09:06:14.132546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.934 [2024-07-26 09:06:14.141891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.934 [2024-07-26 09:06:14.142272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.934 [2024-07-26 09:06:14.142302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.934 [2024-07-26 09:06:14.142318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.934 [2024-07-26 09:06:14.142532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.934 [2024-07-26 09:06:14.142759] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.934 [2024-07-26 09:06:14.142780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.934 [2024-07-26 09:06:14.142794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.934 [2024-07-26 09:06:14.146020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.934 [2024-07-26 09:06:14.155454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:55.934 [2024-07-26 09:06:14.155844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.934 [2024-07-26 09:06:14.155873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.934 [2024-07-26 09:06:14.155890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.934 [2024-07-26 09:06:14.156115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.934 [2024-07-26 09:06:14.156336] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.934 [2024-07-26 09:06:14.156372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.934 [2024-07-26 09:06:14.156385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.934 [2024-07-26 09:06:14.158268] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.934 [2024-07-26 09:06:14.159675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.934 [2024-07-26 09:06:14.169010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.934 [2024-07-26 09:06:14.169410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.934 [2024-07-26 09:06:14.169438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.934 [2024-07-26 09:06:14.169454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.934 [2024-07-26 09:06:14.169682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.934 [2024-07-26 09:06:14.169903] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.934 [2024-07-26 09:06:14.169923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.934 [2024-07-26 09:06:14.169936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.934 [2024-07-26 09:06:14.173138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.934 [2024-07-26 09:06:14.182677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.934 [2024-07-26 09:06:14.183072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.934 [2024-07-26 09:06:14.183100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.934 [2024-07-26 09:06:14.183117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.934 [2024-07-26 09:06:14.183332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.934 [2024-07-26 09:06:14.183561] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.934 [2024-07-26 09:06:14.183582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.934 [2024-07-26 09:06:14.183596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.934 [2024-07-26 09:06:14.186856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.934 [2024-07-26 09:06:14.196193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.934 [2024-07-26 09:06:14.196742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.934 [2024-07-26 09:06:14.196782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.934 [2024-07-26 09:06:14.196802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.934 [2024-07-26 09:06:14.197043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.934 [2024-07-26 09:06:14.197291] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.934 [2024-07-26 09:06:14.197314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.934 [2024-07-26 09:06:14.197332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.934 Malloc0 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.934 [2024-07-26 09:06:14.200556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.934 [2024-07-26 09:06:14.209939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.934 [2024-07-26 09:06:14.210338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.934 [2024-07-26 09:06:14.210366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116eb50 with addr=10.0.0.2, port=4420 00:32:55.934 [2024-07-26 09:06:14.210389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116eb50 is same with the state(5) to be set 00:32:55.934 [2024-07-26 09:06:14.210617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116eb50 (9): Bad file descriptor 00:32:55.934 [2024-07-26 09:06:14.210830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.934 [2024-07-26 09:06:14.210851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.934 [2024-07-26 09:06:14.210864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.934 [2024-07-26 09:06:14.214149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.934 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.935 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.935 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.935 [2024-07-26 09:06:14.219903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.935 [2024-07-26 09:06:14.223562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.935 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.935 09:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1115855 00:32:55.935 [2024-07-26 09:06:14.258476] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:05.912 00:33:05.912 Latency(us) 00:33:05.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.912 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:05.912 Verification LBA range: start 0x0 length 0x4000 00:33:05.912 Nvme1n1 : 15.02 6710.84 26.21 8552.05 0.00 8361.21 952.70 20000.62 00:33:05.912 =================================================================================================================== 00:33:05.912 Total : 6710.84 26.21 8552.05 0.00 8361.21 952.70 20000.62 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:05.912 rmmod nvme_tcp 00:33:05.912 rmmod nvme_fabrics 00:33:05.912 rmmod nvme_keyring 00:33:05.912 09:06:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1116807 ']' 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1116807 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1116807 ']' 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1116807 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1116807 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1116807' 00:33:05.912 killing process with pid 1116807 00:33:05.912 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1116807 00:33:05.913 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1116807 00:33:05.913 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:05.913 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:05.913 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:05.913 09:06:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:05.913 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:05.913 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.913 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.913 09:06:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.822 09:06:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:07.822 00:33:07.822 real 0m22.426s 00:33:07.822 user 1m0.025s 00:33:07.822 sys 0m4.198s 00:33:07.822 09:06:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:07.822 09:06:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:07.822 ************************************ 00:33:07.822 END TEST nvmf_bdevperf 00:33:07.822 ************************************ 00:33:07.822 09:06:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:07.822 09:06:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:07.822 09:06:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:07.822 09:06:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.822 ************************************ 00:33:07.822 START TEST nvmf_target_disconnect 00:33:07.822 ************************************ 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:07.822 * Looking for test storage... 
00:33:07.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.822 09:06:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:07.822 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:07.823 09:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:09.727 
09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:09.727 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.728 09:06:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:09.728 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.728 09:06:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:09.728 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:09.728 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:09.728 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:09.728 09:06:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:09.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:09.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:33:09.728 00:33:09.728 --- 10.0.0.2 ping statistics --- 00:33:09.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.728 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:09.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:09.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:33:09.728 00:33:09.728 --- 10.0.0.1 ping statistics --- 00:33:09.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.728 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:09.728 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:10.015 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:10.015 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:10.015 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:10.015 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:10.015 ************************************ 00:33:10.015 START TEST nvmf_target_disconnect_tc1 00:33:10.015 ************************************ 00:33:10.015 09:06:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:10.016 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.016 [2024-07-26 09:06:28.287707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.016 [2024-07-26 09:06:28.287794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad03e0 with addr=10.0.0.2, port=4420 00:33:10.016 [2024-07-26 09:06:28.287827] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:10.016 [2024-07-26 09:06:28.287865] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:10.016 [2024-07-26 09:06:28.287878] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:10.016 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:10.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:10.016 Initializing NVMe Controllers 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:10.016 09:06:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:10.016 00:33:10.016 real 0m0.085s 00:33:10.016 user 0m0.033s 00:33:10.016 sys 0m0.052s 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:10.016 ************************************ 00:33:10.016 END TEST nvmf_target_disconnect_tc1 00:33:10.016 ************************************ 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:10.016 ************************************ 00:33:10.016 START TEST nvmf_target_disconnect_tc2 00:33:10.016 ************************************ 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1119946 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1119946 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1119946 ']' 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:10.016 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.016 [2024-07-26 09:06:28.401103] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:10.016 [2024-07-26 09:06:28.401176] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.016 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.016 [2024-07-26 09:06:28.437968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:10.275 [2024-07-26 09:06:28.466640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:10.275 [2024-07-26 09:06:28.553223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.275 [2024-07-26 09:06:28.553276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.275 [2024-07-26 09:06:28.553306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.275 [2024-07-26 09:06:28.553319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.275 [2024-07-26 09:06:28.553330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:10.275 [2024-07-26 09:06:28.553470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:10.275 [2024-07-26 09:06:28.553544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:10.275 [2024-07-26 09:06:28.553607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:10.275 [2024-07-26 09:06:28.553610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:10.275 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:10.275 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:33:10.275 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:10.276 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:10.276 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.276 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.276 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:10.276 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.276 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.276 Malloc0 00:33:10.276 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.276 09:06:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:10.276 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.276 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.276 [2024-07-26 09:06:28.733029] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.536 09:06:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.536 [2024-07-26 09:06:28.761324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1120085 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:10.536 09:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:10.536 EAL: No free 2048 kB 
hugepages reported on node 1 00:33:12.452 09:06:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1119946 00:33:12.452 09:06:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, 
sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 [2024-07-26 09:06:30.787348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 
00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 
00:33:12.452 starting I/O failed 00:33:12.452 [2024-07-26 09:06:30.787701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting 
I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Write completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 [2024-07-26 09:06:30.788070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.452 starting I/O failed 00:33:12.452 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 
00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Write completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Write completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Write completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Write completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Read completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 Write completed with error (sct=0, sc=8) 00:33:12.453 starting I/O failed 00:33:12.453 [2024-07-26 09:06:30.788397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: 
*ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:12.453 [2024-07-26 09:06:30.788672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.788723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.788957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.789008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.789205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.789231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.789364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.789389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.789516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.789542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 
00:33:12.453 [2024-07-26 09:06:30.789713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.789738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.789862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.789887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.790040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.790070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.790196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.790222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.790339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.790366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 
00:33:12.453 [2024-07-26 09:06:30.790555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.790580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.790733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.790775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.790917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.790946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.791141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.791186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.791322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.791349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 
00:33:12.453 [2024-07-26 09:06:30.791503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.791530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.791710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.791736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.791870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.791911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.792106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.792135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.792312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.792338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 
00:33:12.453 [2024-07-26 09:06:30.792616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.792666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.792815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.792840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.793019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.793044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.793166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.793192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 00:33:12.453 [2024-07-26 09:06:30.793314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.453 [2024-07-26 09:06:30.793356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.453 qpair failed and we were unable to recover it. 
00:33:12.453 [2024-07-26 09:06:30.793508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.453 [2024-07-26 09:06:30.793550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.453 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed with errno = 111, sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeats for each reconnect attempt, with timestamps advancing from 09:06:30.793508 through 09:06:30.813722 ...]
00:33:12.457 [2024-07-26 09:06:30.813894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.813919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.814043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.814076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.814214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.814242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.814422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.814448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.814612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.814640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 
00:33:12.457 [2024-07-26 09:06:30.814805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.814830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.814939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.814965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.815141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.815167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.815309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.815334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.815478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.815503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 
00:33:12.457 [2024-07-26 09:06:30.815631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.815656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.815770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.815795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.815920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.815946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.816071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.816101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.816251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.816277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 
00:33:12.457 [2024-07-26 09:06:30.816396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.816421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.816536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.816562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.816685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.816710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.816833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.816858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.817004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.817030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 
00:33:12.457 [2024-07-26 09:06:30.817173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.817202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.817356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.817381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.817526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.817551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.817677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.817702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.817846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.817871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 
00:33:12.457 [2024-07-26 09:06:30.818010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.818035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.818187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.457 [2024-07-26 09:06:30.818213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.457 qpair failed and we were unable to recover it. 00:33:12.457 [2024-07-26 09:06:30.818353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.818378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.818534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.818559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.818681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.818706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 
00:33:12.458 [2024-07-26 09:06:30.818867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.818905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.819031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.819069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.819245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.819271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.819416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.819459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.819633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.819681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 
00:33:12.458 [2024-07-26 09:06:30.819823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.819870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.819996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.820023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.820211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.820237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.820380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.820406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.820530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.820556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 
00:33:12.458 [2024-07-26 09:06:30.820753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.820781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.820934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.820976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.821111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.821137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.821276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.821302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.821447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.821472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 
00:33:12.458 [2024-07-26 09:06:30.821619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.821644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.821790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.821818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.821995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.822025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.822143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.822169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.822292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.822318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 
00:33:12.458 [2024-07-26 09:06:30.822484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.822513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.822699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.822727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.822871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.822896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.823015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.823041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.823222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.823261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 
00:33:12.458 [2024-07-26 09:06:30.823401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.823446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.823625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.823652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.823789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.823815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.823929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.823955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 00:33:12.458 [2024-07-26 09:06:30.824096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.824124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.458 qpair failed and we were unable to recover it. 
00:33:12.458 [2024-07-26 09:06:30.824246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.458 [2024-07-26 09:06:30.824273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.824453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.824497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.824644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.824671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.824824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.824850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.825023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.825049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 
00:33:12.459 [2024-07-26 09:06:30.825244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.825273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.825487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.825514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.825638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.825678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.825847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.825875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.826031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.826057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 
00:33:12.459 [2024-07-26 09:06:30.826213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.826239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.826383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.826409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.826546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.826571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.826760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.826788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.826960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.826989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 
00:33:12.459 [2024-07-26 09:06:30.827133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.827159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.827277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.827303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.827426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.827451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.827617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.827642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.827779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.827804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 
00:33:12.459 [2024-07-26 09:06:30.827943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.827984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.828149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.828175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.828319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.828344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.828482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.828510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 00:33:12.459 [2024-07-26 09:06:30.828665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.459 [2024-07-26 09:06:30.828693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.459 qpair failed and we were unable to recover it. 
00:33:12.459 [2024-07-26 09:06:30.828845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.828874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.459 [2024-07-26 09:06:30.829035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.829070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.459 [2024-07-26 09:06:30.829236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.829261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.459 [2024-07-26 09:06:30.829435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.829461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.459 [2024-07-26 09:06:30.829640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.829665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.459 [2024-07-26 09:06:30.829815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.829856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.459 [2024-07-26 09:06:30.829985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.830013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.459 [2024-07-26 09:06:30.830183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.830209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.459 [2024-07-26 09:06:30.830337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.830366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.459 [2024-07-26 09:06:30.830510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.830550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.459 [2024-07-26 09:06:30.830714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.459 [2024-07-26 09:06:30.830743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.459 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.830936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.830961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.831116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.831142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.831255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.831280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.831422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.831447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.831556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.831581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.831744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.831772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.831921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.831960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.832158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.832203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.832376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.832402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.832591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.832620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.832807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.832852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.833023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.833049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.833215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.833258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.833424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.833467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.833627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.833652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.833805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.833832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.834007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.834049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.834225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.834253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.834418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.834446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.834625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.834651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.834771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.834797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.834948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.834973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.835092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.835118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.835267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.835292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.835489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.835517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.835681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.835706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.835879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.835908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.836051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.836084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.836251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.836277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.836458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.836486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.836654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.836682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.836857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.836882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.837001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.837026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.837212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.837238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.837376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.837404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.837583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.837608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.837750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.837775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.460 [2024-07-26 09:06:30.837912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.460 [2024-07-26 09:06:30.837940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.460 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.838074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.838124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.838268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.838293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.838425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.838453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.838604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.838629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.838749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.838774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.838916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.838944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.839081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.839107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.839250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.839276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.839433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.839466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.839605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.839648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.839801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.839829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.839953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.839981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.840148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.840174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.840340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.840369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.840555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.840581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.840705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.840730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.840862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.840901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.841028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.841056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.841211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.841237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.841399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.841426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.841571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.841597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.841752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.841778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.841898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.841924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.842038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.842071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.842225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.842251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.842421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.842447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.842572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.842597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.842766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.842792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.842905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.842930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.843082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.843108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.461 [2024-07-26 09:06:30.843228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.461 [2024-07-26 09:06:30.843253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.461 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.843392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.843418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.843529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.843555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.843715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.843743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.843898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.843926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.844051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.844089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.844257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.844283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.844430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.844456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.844594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.844620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.844804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.844833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.844998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.845024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.845179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.845205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.845321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.845346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.845537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.845565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.845718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.845747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.845870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.462 [2024-07-26 09:06:30.845900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.462 qpair failed and we were unable to recover it.
00:33:12.462 [2024-07-26 09:06:30.846030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.846056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.846216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.846242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.846390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.846416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.846596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.846621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.846769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.846794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 
00:33:12.462 [2024-07-26 09:06:30.846941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.846967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.847111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.847141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.847265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.847292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.847466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.847510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.847768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.847820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 
00:33:12.462 [2024-07-26 09:06:30.847964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.847990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.848161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.848188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.848357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.848383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.848531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.848557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.848726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.848774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 
00:33:12.462 [2024-07-26 09:06:30.848917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.848944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.849103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.849137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.849267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.849295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.849488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.849513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.849631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.849656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 
00:33:12.462 [2024-07-26 09:06:30.849845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.849871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.462 qpair failed and we were unable to recover it. 00:33:12.462 [2024-07-26 09:06:30.849996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.462 [2024-07-26 09:06:30.850022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.850173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.850200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.850314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.850361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.850543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.850569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 
00:33:12.463 [2024-07-26 09:06:30.850717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.850742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.850913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.850941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.851107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.851133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.851276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.851319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.851501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.851529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 
00:33:12.463 [2024-07-26 09:06:30.851693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.851721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.851881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.851909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.852082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.852108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.852259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.852288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.852485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.852529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 
00:33:12.463 [2024-07-26 09:06:30.852749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.852775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.852894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.852921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.853101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.853139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.853310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.853336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.853488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.853514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 
00:33:12.463 [2024-07-26 09:06:30.853708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.853751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.853900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.853926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.854110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.854137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.854287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.854320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.854523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.854552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 
00:33:12.463 [2024-07-26 09:06:30.854763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.854824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.854983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.855012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.855192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.855219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.855398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.855452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.855604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.855630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 
00:33:12.463 [2024-07-26 09:06:30.855764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.855790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.855932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.855970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.856160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.856188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.856404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.856430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 00:33:12.463 [2024-07-26 09:06:30.857487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.463 [2024-07-26 09:06:30.857526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.463 qpair failed and we were unable to recover it. 
00:33:12.463 [2024-07-26 09:06:30.857724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.857768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.857923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.857949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.858112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.858138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.858324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.858351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.858556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.858585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 
00:33:12.464 [2024-07-26 09:06:30.858750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.858776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.858946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.858973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.859124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.859150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.859293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.859319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.859468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.859494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 
00:33:12.464 [2024-07-26 09:06:30.859666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.859692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.859806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.859831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.859954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.859979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.860138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.860166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.860339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.860386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 
00:33:12.464 [2024-07-26 09:06:30.860554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.860586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.860746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.860772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.860889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.860914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.861054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.861087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.861228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.861270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 
00:33:12.464 [2024-07-26 09:06:30.861435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.861463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.861639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.861664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.861816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.861842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.861985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.862010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.862209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.862236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 
00:33:12.464 [2024-07-26 09:06:30.862359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.862385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.862531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.862573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.862748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.862774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.862947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.862973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.863113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.863139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 
00:33:12.464 [2024-07-26 09:06:30.863259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.863285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.863439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.863465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.863639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.863664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.863870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.863899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 00:33:12.464 [2024-07-26 09:06:30.864072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.464 [2024-07-26 09:06:30.864099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.464 qpair failed and we were unable to recover it. 
00:33:12.465 [2024-07-26 09:06:30.868301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.465 [2024-07-26 09:06:30.868341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:12.465 qpair failed and we were unable to recover it.
00:33:12.468 [2024-07-26 09:06:30.885169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.885195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.885339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.885364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.885534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.885559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.885708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.885733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.885906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.885931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 
00:33:12.468 [2024-07-26 09:06:30.886085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.886113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.886265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.886290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.886442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.886469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.886632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.886658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.886779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.886803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 
00:33:12.468 [2024-07-26 09:06:30.886974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.887000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.887162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.887188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.887306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.887331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.887465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.887493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.887653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.887681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 
00:33:12.468 [2024-07-26 09:06:30.887879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.887904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.888057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.888090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.888209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.888234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.888379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.888404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.888643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.888676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 
00:33:12.468 [2024-07-26 09:06:30.888832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.888860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.889035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.889065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.889187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.889212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.889362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.889391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.889577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.889602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 
00:33:12.468 [2024-07-26 09:06:30.889771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.889813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.889969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.889997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.890145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.890172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.890323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.890348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.890512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.890540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 
00:33:12.468 [2024-07-26 09:06:30.890755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.890784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.890964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.890989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.891151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.891177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.891321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.891346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 00:33:12.468 [2024-07-26 09:06:30.891507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.468 [2024-07-26 09:06:30.891547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.468 qpair failed and we were unable to recover it. 
00:33:12.468 [2024-07-26 09:06:30.891688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.891714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.891857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.891885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.892062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.892088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.892256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.892281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.892481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.892506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 
00:33:12.469 [2024-07-26 09:06:30.892622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.892647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.892790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.892817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.892978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.893005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.893166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.893192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.893335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.893360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 
00:33:12.469 [2024-07-26 09:06:30.893508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.893544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.893764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.893792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.893983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.894017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.894165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.894191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.894310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.894335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 
00:33:12.469 [2024-07-26 09:06:30.894465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.894490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.894668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.894693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.894885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.894913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.895052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.895097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.895264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.895290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 
00:33:12.469 [2024-07-26 09:06:30.895403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.895428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.895538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.895564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.895707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.895733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.895917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.895942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.896099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.896142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 
00:33:12.469 [2024-07-26 09:06:30.896275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.896309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.896483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.896509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.896626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.896651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.896824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.896848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.896991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.897016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 
00:33:12.469 [2024-07-26 09:06:30.897166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.897192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.897336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.897377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.897540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.897565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.897709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.897749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.897912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.897937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 
00:33:12.469 [2024-07-26 09:06:30.898104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.898131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.898274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.898303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.469 [2024-07-26 09:06:30.898504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.469 [2024-07-26 09:06:30.898530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.469 qpair failed and we were unable to recover it. 00:33:12.470 [2024-07-26 09:06:30.898679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.470 [2024-07-26 09:06:30.898705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.470 qpair failed and we were unable to recover it. 00:33:12.470 [2024-07-26 09:06:30.898850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.470 [2024-07-26 09:06:30.898875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.470 qpair failed and we were unable to recover it. 
00:33:12.470 [2024-07-26 09:06:30.899015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.470 [2024-07-26 09:06:30.899040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.470 qpair failed and we were unable to recover it. 00:33:12.470 [2024-07-26 09:06:30.899161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.470 [2024-07-26 09:06:30.899187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.470 qpair failed and we were unable to recover it. 00:33:12.470 [2024-07-26 09:06:30.899363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.470 [2024-07-26 09:06:30.899388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.470 qpair failed and we were unable to recover it. 00:33:12.470 [2024-07-26 09:06:30.899513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.470 [2024-07-26 09:06:30.899539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.470 qpair failed and we were unable to recover it. 00:33:12.470 [2024-07-26 09:06:30.899686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.470 [2024-07-26 09:06:30.899711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.470 qpair failed and we were unable to recover it. 
00:33:12.470 [... identical error sequence repeats from 09:06:30.899856 through 09:06:30.918634: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. Repeated entries omitted. ...]
00:33:12.753 [2024-07-26 09:06:30.918773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.753 [2024-07-26 09:06:30.918800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.753 qpair failed and we were unable to recover it. 00:33:12.753 [2024-07-26 09:06:30.918912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.753 [2024-07-26 09:06:30.918938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.753 qpair failed and we were unable to recover it. 00:33:12.753 [2024-07-26 09:06:30.919114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.753 [2024-07-26 09:06:30.919140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.753 qpair failed and we were unable to recover it. 00:33:12.753 [2024-07-26 09:06:30.919254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.753 [2024-07-26 09:06:30.919280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.753 qpair failed and we were unable to recover it. 00:33:12.753 [2024-07-26 09:06:30.919396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.753 [2024-07-26 09:06:30.919421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.753 qpair failed and we were unable to recover it. 
00:33:12.753 [2024-07-26 09:06:30.919548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.753 [2024-07-26 09:06:30.919574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.753 qpair failed and we were unable to recover it. 00:33:12.753 [2024-07-26 09:06:30.919726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.753 [2024-07-26 09:06:30.919755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.753 qpair failed and we were unable to recover it. 00:33:12.753 [2024-07-26 09:06:30.919902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.753 [2024-07-26 09:06:30.919928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.753 qpair failed and we were unable to recover it. 00:33:12.753 [2024-07-26 09:06:30.920054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.753 [2024-07-26 09:06:30.920085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.753 qpair failed and we were unable to recover it. 00:33:12.753 [2024-07-26 09:06:30.920267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.753 [2024-07-26 09:06:30.920293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.753 qpair failed and we were unable to recover it. 
00:33:12.754 [2024-07-26 09:06:30.920407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.920432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.920553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.920578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.920691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.920716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.920861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.920886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.921026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.921051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 
00:33:12.754 [2024-07-26 09:06:30.921200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.921226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.921384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.921410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.921547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.921572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.921746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.921772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.921918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.921945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 
00:33:12.754 [2024-07-26 09:06:30.922067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.922098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.922251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.922277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.922458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.922483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.922605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.922630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.922801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.922827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 
00:33:12.754 [2024-07-26 09:06:30.923003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.923028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.923193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.923220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.923375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.923401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.923532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.923558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.923699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.923724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 
00:33:12.754 [2024-07-26 09:06:30.923879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.923905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.924053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.924089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.924263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.924288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.924414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.924447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.924587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.924612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 
00:33:12.754 [2024-07-26 09:06:30.924756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.924781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.924903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.924928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.925072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.925098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.925217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.925242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.925392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.925419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 
00:33:12.754 [2024-07-26 09:06:30.925590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.925616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.925732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.925757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.925900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.925925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.926071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.926101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.926221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.926249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 
00:33:12.754 [2024-07-26 09:06:30.926362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.926393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.926534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.926559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.754 [2024-07-26 09:06:30.926688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.754 [2024-07-26 09:06:30.926717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.754 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.926892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.926918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.927088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.927114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 
00:33:12.755 [2024-07-26 09:06:30.927253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.927279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.927429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.927455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.927600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.927625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.927750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.927776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.927961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.927986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 
00:33:12.755 [2024-07-26 09:06:30.928164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.928191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.928334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.928367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.928522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.928548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.928694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.928719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.928864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.928890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 
00:33:12.755 [2024-07-26 09:06:30.929015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.929041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.929178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.929204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.929341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.929367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.929516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.929542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.929672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.929698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 
00:33:12.755 [2024-07-26 09:06:30.929862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.929888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.930069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.930100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.930219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.930244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.930393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.930419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.930567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.930593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 
00:33:12.755 [2024-07-26 09:06:30.930766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.930792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.930938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.930964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.931121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.931148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.931300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.931325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.931476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.931508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 
00:33:12.755 [2024-07-26 09:06:30.932366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.932400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.932586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.932613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.755 [2024-07-26 09:06:30.932742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.755 [2024-07-26 09:06:30.932767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.755 qpair failed and we were unable to recover it. 00:33:12.756 [2024-07-26 09:06:30.932916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.756 [2024-07-26 09:06:30.932942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.756 qpair failed and we were unable to recover it. 00:33:12.756 [2024-07-26 09:06:30.933090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.756 [2024-07-26 09:06:30.933116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.756 qpair failed and we were unable to recover it. 
00:33:12.756 [2024-07-26 09:06:30.933266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.756 [2024-07-26 09:06:30.933291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.756 qpair failed and we were unable to recover it. 00:33:12.756 [2024-07-26 09:06:30.933442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.756 [2024-07-26 09:06:30.933468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.756 qpair failed and we were unable to recover it. 00:33:12.756 [2024-07-26 09:06:30.933643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.756 [2024-07-26 09:06:30.933669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.756 qpair failed and we were unable to recover it. 00:33:12.756 [2024-07-26 09:06:30.933838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.756 [2024-07-26 09:06:30.933864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.756 qpair failed and we were unable to recover it. 00:33:12.756 [2024-07-26 09:06:30.934038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.756 [2024-07-26 09:06:30.934073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.756 qpair failed and we were unable to recover it. 
00:33:12.756 [2024-07-26 09:06:30.934234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.934259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.934384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.934409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.934537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.934564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.934722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.934748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.934918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.934944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.935073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.935100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.935243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.935268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.935410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.935436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.935589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.935614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.935735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.935761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.935930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.935956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.936122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.936148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.936289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.936314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.936465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.936490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.936643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.936668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.936820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.936845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.937003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.937028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.937153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.937179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.937348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.937373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.937516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.937541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.937694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.937720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.937840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.937865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.938015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.938041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.938201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.938227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.938345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.938376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.938490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.938525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.938681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.938706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.938873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.938899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.939057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.939111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.939258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.939284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.756 qpair failed and we were unable to recover it.
00:33:12.756 [2024-07-26 09:06:30.939414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.756 [2024-07-26 09:06:30.939443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.939588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.939613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.939764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.939790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.939915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.939941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.940092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.940118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.940268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.940293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.940450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.940475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.940615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.940640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.940762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.940788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.940912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.940937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.941072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.941099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.941276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.941302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.941448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.941474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.941616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.941642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.941798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.941835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.941950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.941975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.942133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.942160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.942307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.942334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.942503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.942528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.942683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.942708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.942851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.942876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.943017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.943043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.943238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.943264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.943420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.943445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.943624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.943649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.943795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.943820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.943972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.943997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.944126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.944156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.944279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.944304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.944475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.944501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.944657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.944682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.944836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.944861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.944997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.945022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.945209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.945235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.945360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.945386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.945565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.945591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.945715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.945740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.757 [2024-07-26 09:06:30.945872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.757 [2024-07-26 09:06:30.945897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.757 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.946074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.946104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.946276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.946302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.946425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.946450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.946606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.946632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.946769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.946795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.946908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.946932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.947110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.947136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.947256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.947281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.947452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.947477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.947654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.947679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.947852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.947877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.947998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.948023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.948228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.948255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.948437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.948462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.948584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.948609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.948761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.948788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.948934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.948959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.949119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.949146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.949265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.949291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.949469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.949495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.949615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.949640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.949795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.949820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.949935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.949961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.950084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.950111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.950259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.950285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.950465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.950490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.950640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.758 [2024-07-26 09:06:30.950665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.758 qpair failed and we were unable to recover it.
00:33:12.758 [2024-07-26 09:06:30.950825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.950851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 00:33:12.758 [2024-07-26 09:06:30.951022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.951047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 00:33:12.758 [2024-07-26 09:06:30.951196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.951222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 00:33:12.758 [2024-07-26 09:06:30.951340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.951379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 00:33:12.758 [2024-07-26 09:06:30.951525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.951550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 
00:33:12.758 [2024-07-26 09:06:30.951691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.951717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 00:33:12.758 [2024-07-26 09:06:30.951862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.951887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 00:33:12.758 [2024-07-26 09:06:30.952031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.952071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 00:33:12.758 [2024-07-26 09:06:30.952216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.952241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 00:33:12.758 [2024-07-26 09:06:30.952420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.952445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 
00:33:12.758 [2024-07-26 09:06:30.952614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.758 [2024-07-26 09:06:30.952640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.758 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.952813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.952838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.952958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.952984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.953137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.953163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.953331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.953356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 
00:33:12.759 [2024-07-26 09:06:30.953505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.953530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.953706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.953731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.953873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.953899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.954065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.954097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.954253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.954279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 
00:33:12.759 [2024-07-26 09:06:30.954400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.954425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.954557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.954583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.954731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.954768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.954894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.954919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.955080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.955107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 
00:33:12.759 [2024-07-26 09:06:30.955277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.955303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.955451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.955478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.955638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.955664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.955778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.955803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.955970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.955995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 
00:33:12.759 [2024-07-26 09:06:30.956124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.956154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.956324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.956350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.956475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.956500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.956648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.956673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.956812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.956837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 
00:33:12.759 [2024-07-26 09:06:30.956959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.956985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.957113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.957139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.957286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.957311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.957434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.957459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.957650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.957675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 
00:33:12.759 [2024-07-26 09:06:30.957814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.957839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.957988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.958014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.958146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.958172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.958294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.958319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.958499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.958524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 
00:33:12.759 [2024-07-26 09:06:30.958644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.958670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.958812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.958837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.958987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.759 [2024-07-26 09:06:30.959012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.759 qpair failed and we were unable to recover it. 00:33:12.759 [2024-07-26 09:06:30.959147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.959173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.959296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.959321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 
00:33:12.760 [2024-07-26 09:06:30.959474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.959499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.959633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.959658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.959835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.959860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.960002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.960027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.960178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.960204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 
00:33:12.760 [2024-07-26 09:06:30.960323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.960349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.960504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.960529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.960676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.960702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.960854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.960880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.961020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.961045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 
00:33:12.760 [2024-07-26 09:06:30.961202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.961228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.961399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.961424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.961535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.961561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.961682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.961708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.961888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.961913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 
00:33:12.760 [2024-07-26 09:06:30.962035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.962069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.962262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.962288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.962432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.962458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.962615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.962640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.962821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.962847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 
00:33:12.760 [2024-07-26 09:06:30.962991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.963017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.963207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.963237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.963351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.963386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.963553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.963578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.963700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.963725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 
00:33:12.760 [2024-07-26 09:06:30.963874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.963899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.964072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.964098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.964213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.964240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.964420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.964446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 00:33:12.760 [2024-07-26 09:06:30.964562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.760 [2024-07-26 09:06:30.964587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.760 qpair failed and we were unable to recover it. 
00:33:12.760 [2024-07-26 09:06:30.964730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.964755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.964926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.964952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.965133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.965159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.965300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.965325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.965509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.965535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 
00:33:12.761 [2024-07-26 09:06:30.965687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.965724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.965903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.965928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.966111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.966138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.966261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.966286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.966461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.966487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 
00:33:12.761 [2024-07-26 09:06:30.966607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.966633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.966752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.966777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.966925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.966950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.967108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.967133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.967286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.967312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 
00:33:12.761 [2024-07-26 09:06:30.967464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.967489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.967669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.967694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.967811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.967837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.967983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.968008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.968187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.968213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 
00:33:12.761 [2024-07-26 09:06:30.968386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.968411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.968578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.968603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.968774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.968800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.968911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.968937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.969073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.969099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 
00:33:12.761 [2024-07-26 09:06:30.969222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.969247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.969420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.969446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.969587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.969612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.969769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.969794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.969913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.969939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 
00:33:12.761 [2024-07-26 09:06:30.970091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.970117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.970242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.970267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.970409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.970434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.970612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.970637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.970810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.970835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 
00:33:12.761 [2024-07-26 09:06:30.970981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.971006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.971175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.971202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.971352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.761 [2024-07-26 09:06:30.971378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.761 qpair failed and we were unable to recover it. 00:33:12.761 [2024-07-26 09:06:30.971559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.971584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.971756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.971781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 
00:33:12.762 [2024-07-26 09:06:30.971953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.971979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.972126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.972152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.972303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.972329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.972480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.972506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.972655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.972681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 
00:33:12.762 [2024-07-26 09:06:30.972827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.972853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.973030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.973056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.973210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.973236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.973426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.973451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.973579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.973604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 
00:33:12.762 [2024-07-26 09:06:30.973747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.973772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.973894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.973919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.974107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.974134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.974262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.974287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.974397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.974423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 
00:33:12.762 [2024-07-26 09:06:30.974576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.974602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.974748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.974773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.974899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.974926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.975050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.975081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.975252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.975281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 
00:33:12.762 [2024-07-26 09:06:30.975414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.975439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.975554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.975580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.975722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.975748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.975891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.975916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.976086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.976113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 
00:33:12.762 [2024-07-26 09:06:30.976260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.976286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.976415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.976440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.976594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.976619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.976790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.976815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.976986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.977011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 
00:33:12.762 [2024-07-26 09:06:30.977190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.977216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.977358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.977383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.977531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.977556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.977706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.977732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.762 [2024-07-26 09:06:30.977889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.977915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 
00:33:12.762 [2024-07-26 09:06:30.978040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.762 [2024-07-26 09:06:30.978075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.762 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.978228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.978254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.978379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.978405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.978544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.978569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.978714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.978739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 
00:33:12.763 [2024-07-26 09:06:30.978917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.978943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.979112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.979138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.979318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.979343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.979525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.979550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.979660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.979685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 
00:33:12.763 [2024-07-26 09:06:30.979830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.979855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.980027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.980052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.980234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.980259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.980377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.980402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.980572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.980597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 
00:33:12.763 [2024-07-26 09:06:30.980741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.980765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.980938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.980963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.981071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.981097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.981243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.981268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.981419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.981444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 
00:33:12.763 [2024-07-26 09:06:30.981611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.981636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.981756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.981781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.981904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.981930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.982073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.982102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.982245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.982270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 
00:33:12.763 [2024-07-26 09:06:30.982424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.982449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.982593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.982618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.982764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.982790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.982960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.982985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 00:33:12.763 [2024-07-26 09:06:30.983169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.763 [2024-07-26 09:06:30.983194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.763 qpair failed and we were unable to recover it. 
00:33:12.766 [2024-07-26 09:06:31.000551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.000577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.000691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.000716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.000830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.000855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.000977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.001003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.001143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.001188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 
00:33:12.766 [2024-07-26 09:06:31.001312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.001340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.001520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.001547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.001690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.001717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.001895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.001922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.002049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.002083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 
00:33:12.766 [2024-07-26 09:06:31.002235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.002262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.002407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.002435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.002607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.002634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.002783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.002810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.002989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.003014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 
00:33:12.766 [2024-07-26 09:06:31.003164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.766 [2024-07-26 09:06:31.003191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.766 qpair failed and we were unable to recover it. 00:33:12.766 [2024-07-26 09:06:31.003336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.003362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.003514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.003545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.003673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.003699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.003836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.003861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 
00:33:12.767 [2024-07-26 09:06:31.004005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.004030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.004207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.004233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.004358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.004385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.004560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.004587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.004761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.004787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 
00:33:12.767 [2024-07-26 09:06:31.004935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.004961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.005104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.005130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.005275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.005301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.005434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.005459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.005613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.005638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 
00:33:12.767 [2024-07-26 09:06:31.005789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.005815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.005966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.005992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.006128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.006155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.006329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.006355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.006501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.006528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 
00:33:12.767 [2024-07-26 09:06:31.006680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.006706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.006827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.006853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.006976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.007001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.007188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.007214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.007369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.007396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 
00:33:12.767 [2024-07-26 09:06:31.007544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.007570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.007720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.007746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.007922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.007947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.008106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.008133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.008256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.008287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 
00:33:12.767 [2024-07-26 09:06:31.008412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.008438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.008585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.008610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.008758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.008784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.008913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.008938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.009087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.009117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 
00:33:12.767 [2024-07-26 09:06:31.009265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.009292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.009459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.009486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.009663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.009688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.767 [2024-07-26 09:06:31.009813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.767 [2024-07-26 09:06:31.009839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.767 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.009992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.010018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 
00:33:12.768 [2024-07-26 09:06:31.010150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.010176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.010327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.010354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.010525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.010551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.010700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.010726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.010878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.010904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 
00:33:12.768 [2024-07-26 09:06:31.011078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.011105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.011275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.011301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.011493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.011518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.011665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.011692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.011843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.011869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 
00:33:12.768 [2024-07-26 09:06:31.012008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.012033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.012191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.012217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.012363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.012389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.012509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.012535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.012709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.012735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 
00:33:12.768 [2024-07-26 09:06:31.012875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.012901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.013056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.013087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.013238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.013263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.013414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.013440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.013586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.013612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 
00:33:12.768 [2024-07-26 09:06:31.013764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.013790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.013910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.013936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.014065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.014090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.014209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.014235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.014384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.014410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 
00:33:12.768 [2024-07-26 09:06:31.014584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.014609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.014747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.014772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.014921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.014947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.015129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.015156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.015281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.015311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 
00:33:12.768 [2024-07-26 09:06:31.015472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.015498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.015619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.015646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.015819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.015845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.015991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.016017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.016137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.016163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 
00:33:12.768 [2024-07-26 09:06:31.016279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.768 [2024-07-26 09:06:31.016304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.768 qpair failed and we were unable to recover it. 00:33:12.768 [2024-07-26 09:06:31.016480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.016506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.016654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.016680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.016803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.016828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.017009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.017035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 
00:33:12.769 [2024-07-26 09:06:31.017185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.017210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.017360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.017386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.017554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.017580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.017762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.017789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.017941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.017966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 
00:33:12.769 [2024-07-26 09:06:31.018085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.018112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.018285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.018311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.018449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.018474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.018650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.018675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.018822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.018849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 
00:33:12.769 [2024-07-26 09:06:31.018972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.018998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.019171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.019198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.019322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.019349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.019524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.019550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.019710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.019736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 
00:33:12.769 [2024-07-26 09:06:31.019858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.019884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.020001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.020027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.020180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.020206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.020354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.020380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.020549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.020575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 
00:33:12.769 [2024-07-26 09:06:31.020722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.020747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.020869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.020895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.021040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.021070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.021222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.021247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.021396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.021422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 
00:33:12.769 [2024-07-26 09:06:31.021551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.021577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.021719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.021744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.021889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.021915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.022092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.022119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 00:33:12.769 [2024-07-26 09:06:31.022244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.022275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.769 qpair failed and we were unable to recover it. 
00:33:12.769 [2024-07-26 09:06:31.022432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.769 [2024-07-26 09:06:31.022457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.022631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.022657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.022830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.022857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.022971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.022996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.023151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.023177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 
00:33:12.770 [2024-07-26 09:06:31.023347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.023374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.023496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.023523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.023671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.023697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.023818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.023844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.023989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.024014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 
00:33:12.770 [2024-07-26 09:06:31.024135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.024162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.024291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.024316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.024446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.024474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.024604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.024631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.024784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.024810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 
00:33:12.770 [2024-07-26 09:06:31.024970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.024996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.025143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.025170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.025317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.025343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.025495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.025521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.025647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.025673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 
00:33:12.770 [2024-07-26 09:06:31.025835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.025861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.026008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.026033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.026186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.026212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.026330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.026356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.026502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.026529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 
00:33:12.770 [2024-07-26 09:06:31.026673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.026699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.026848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.026874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.026996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.027023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.027156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.027182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.027297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.027323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 
00:33:12.770 [2024-07-26 09:06:31.027500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.027526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.027676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.027701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.027820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.027845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.027991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.028017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 00:33:12.770 [2024-07-26 09:06:31.028171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.770 [2024-07-26 09:06:31.028197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.770 qpair failed and we were unable to recover it. 
00:33:12.771 [2024-07-26 09:06:31.028346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.028372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.028487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.028513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.028680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.028705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.028856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.028883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.029053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.029089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 
00:33:12.771 [2024-07-26 09:06:31.029207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.029233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.029351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.029376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.029552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.029579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.029728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.029754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.029866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.029893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 
00:33:12.771 [2024-07-26 09:06:31.030037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.030070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.030224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.030250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.030376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.030402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.030523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.030549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.030670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.030695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 
00:33:12.771 [2024-07-26 09:06:31.030876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.030902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.031050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.031094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.031242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.031269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.031417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.031442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 00:33:12.771 [2024-07-26 09:06:31.031632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.031659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 
00:33:12.771 [2024-07-26 09:06:31.031825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.771 [2024-07-26 09:06:31.031851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.771 qpair failed and we were unable to recover it. 
[... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats verbatim with timestamps advancing from 09:06:31.031974 through 09:06:31.052103 ...]
00:33:12.774 [2024-07-26 09:06:31.052268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.774 [2024-07-26 09:06:31.052293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.774 qpair failed and we were unable to recover it. 00:33:12.774 [2024-07-26 09:06:31.052414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.774 [2024-07-26 09:06:31.052442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.774 qpair failed and we were unable to recover it. 00:33:12.774 [2024-07-26 09:06:31.052584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.774 [2024-07-26 09:06:31.052610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.774 qpair failed and we were unable to recover it. 00:33:12.774 [2024-07-26 09:06:31.052797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.774 [2024-07-26 09:06:31.052822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.774 qpair failed and we were unable to recover it. 00:33:12.774 [2024-07-26 09:06:31.052961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.774 [2024-07-26 09:06:31.052986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.774 qpair failed and we were unable to recover it. 
00:33:12.774 [2024-07-26 09:06:31.053133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.774 [2024-07-26 09:06:31.053161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.774 qpair failed and we were unable to recover it. 00:33:12.774 [2024-07-26 09:06:31.053310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.774 [2024-07-26 09:06:31.053335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.774 qpair failed and we were unable to recover it. 00:33:12.774 [2024-07-26 09:06:31.053480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.774 [2024-07-26 09:06:31.053505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.774 qpair failed and we were unable to recover it. 00:33:12.774 [2024-07-26 09:06:31.053629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.774 [2024-07-26 09:06:31.053656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.774 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.053803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.053832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 
00:33:12.775 [2024-07-26 09:06:31.053980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.054005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.054130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.054156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.054299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.054325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.054470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.054495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.054666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.054692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 
00:33:12.775 [2024-07-26 09:06:31.054850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.054875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.055033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.055066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.055211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.055235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.055386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.055413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.055538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.055563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 
00:33:12.775 [2024-07-26 09:06:31.055686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.055711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.055860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.055906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.056100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.056127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.056273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.056299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.056447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.056473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 
00:33:12.775 [2024-07-26 09:06:31.056645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.056672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.056796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.056821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.056984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.057013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.057210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.057237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.057390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.057439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 
00:33:12.775 [2024-07-26 09:06:31.057600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.057625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.057799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.057825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.057947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.057973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.058102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.058129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.058247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.058273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 
00:33:12.775 [2024-07-26 09:06:31.058416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.058445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.058565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.058590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.058744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.058772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.058948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.058974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.059122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.059149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 
00:33:12.775 [2024-07-26 09:06:31.059322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.059348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.059525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.059552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.059676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.059704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.059859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.059886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.060048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.060081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 
00:33:12.775 [2024-07-26 09:06:31.060208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.775 [2024-07-26 09:06:31.060234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.775 qpair failed and we were unable to recover it. 00:33:12.775 [2024-07-26 09:06:31.060407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.060432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.060579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.060605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.060731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.060758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.060922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.060951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 
00:33:12.776 [2024-07-26 09:06:31.061075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.061101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.061263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.061291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.061485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.061515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.061728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.061758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.061964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.061992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 
00:33:12.776 [2024-07-26 09:06:31.062145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.062170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.062324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.062350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.062494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.062520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.062668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.062693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.062831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.062858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 
00:33:12.776 [2024-07-26 09:06:31.062975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.063021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.063225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.063251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.063410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.063436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.063580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.063607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.063755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.063780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 
00:33:12.776 [2024-07-26 09:06:31.063938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.063965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.064142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.064169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.064333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.064359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.064480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.064505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.064633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.064663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 
00:33:12.776 [2024-07-26 09:06:31.064810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.064835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.064978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.065003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.065119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.065146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.065295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.065321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 00:33:12.776 [2024-07-26 09:06:31.065463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.776 [2024-07-26 09:06:31.065491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.776 qpair failed and we were unable to recover it. 
00:33:12.776 [2024-07-26 09:06:31.065639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.776 [2024-07-26 09:06:31.065665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.776 qpair failed and we were unable to recover it.
00:33:12.776 [2024-07-26 09:06:31.065787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.776 [2024-07-26 09:06:31.065812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.776 qpair failed and we were unable to recover it.
00:33:12.776 [2024-07-26 09:06:31.065935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.776 [2024-07-26 09:06:31.065963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.776 qpair failed and we were unable to recover it.
00:33:12.776 [2024-07-26 09:06:31.066105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.776 [2024-07-26 09:06:31.066149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.776 qpair failed and we were unable to recover it.
00:33:12.776 [2024-07-26 09:06:31.066300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.776 [2024-07-26 09:06:31.066325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.776 qpair failed and we were unable to recover it.
00:33:12.776 [2024-07-26 09:06:31.066439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.776 [2024-07-26 09:06:31.066464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.776 qpair failed and we were unable to recover it.
00:33:12.776 [2024-07-26 09:06:31.066609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.776 [2024-07-26 09:06:31.066653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.776 qpair failed and we were unable to recover it.
00:33:12.776 [2024-07-26 09:06:31.066825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.776 [2024-07-26 09:06:31.066852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.776 qpair failed and we were unable to recover it.
00:33:12.776 [2024-07-26 09:06:31.067005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.776 [2024-07-26 09:06:31.067032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.776 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.067180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.067206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.067331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.067359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.067545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.067572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.067719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.067746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.067889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.067913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.068097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.068124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.068302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.068329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.068506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.068532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.068689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.068714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.068881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.068912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.069083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.069110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.069232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.069258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.069407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.069433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.069588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.069614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.069792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.069818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.069963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.069990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.070147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.070173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.070313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.070338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.070461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.070487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.070614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.070639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.070757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.070783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.070956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.070981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.071103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.071130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.071333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.071363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.071524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.071550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.071696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.071726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.071886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.071914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.072084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.072130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.072283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.072309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.072428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.072454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.072600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.072628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.072752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.072777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.072951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.072976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.073136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.073162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.073316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.073342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.073465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.073492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.777 [2024-07-26 09:06:31.073640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.777 [2024-07-26 09:06:31.073667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.777 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.073813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.073839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.074079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.074123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.074294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.074319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.074470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.074496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.074644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.074671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.074791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.074819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.074970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.074997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.075146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.075172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.075301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.075342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.075493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.075517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.075670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.075714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.075881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.075910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.076070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.076099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.076245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.076270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.076398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.076439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.076644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.076687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.076857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.076887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.077056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.077095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.077223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.077249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.077378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.077405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.077523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.077548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.077698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.077723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.077883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.077912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.078075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.078105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.078249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.078278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.078476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.078502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.078630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.078656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.078833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.078858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.078978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.079004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.079173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.079199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.079350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.079375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.079523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.079548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.079687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.079715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.079909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.079935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.080084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.080110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.080262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.080287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.080441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.080467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.778 [2024-07-26 09:06:31.080615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.778 [2024-07-26 09:06:31.080640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.778 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.080802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.080830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.081053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.081105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.081258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.081285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.081436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.081464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.081579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.081609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.081783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.081818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.081942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.081969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.082152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.082179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.082336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.082381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.082552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.082583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.082747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.082786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.082953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.082979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.083129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.083155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.083308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.083336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.083479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.083506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.083685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.083712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.083855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.083881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.084033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.084065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.084247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.084274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.084434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.084460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.084606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.084632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.084771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.084798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.084943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.084969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.085118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.085146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.085297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.085323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.085494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.085521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.085647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.085673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.085873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.085903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.086044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.086076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.086216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.086242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.086364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.779 [2024-07-26 09:06:31.086390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.779 qpair failed and we were unable to recover it.
00:33:12.779 [2024-07-26 09:06:31.086540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.779 [2024-07-26 09:06:31.086567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.779 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.086691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.086729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.086861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.086888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.087053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.087108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.087260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.087290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 
00:33:12.780 [2024-07-26 09:06:31.087439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.087468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.087601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.087627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.087770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.087796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.087969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.087997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.088123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.088151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 
00:33:12.780 [2024-07-26 09:06:31.088295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.088321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.088441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.088467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.088618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.088646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.088788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.088819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.088998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.089024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 
00:33:12.780 [2024-07-26 09:06:31.089154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.089181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.089336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.089363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.089521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.089550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.089684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.089727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.089866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.089892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 
00:33:12.780 [2024-07-26 09:06:31.090036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.090075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.090192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.090218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.090374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.090403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.090573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.090600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.090748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.090775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 
00:33:12.780 [2024-07-26 09:06:31.090897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.090923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.091045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.091079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.091263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.091290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.091448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.091475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.091597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.091623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 
00:33:12.780 [2024-07-26 09:06:31.091774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.091803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.091997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.092027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.092205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.092235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.092398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.092427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.092588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.092619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 
00:33:12.780 [2024-07-26 09:06:31.092790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.092816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.092987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.093017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.093187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.780 [2024-07-26 09:06:31.093214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.780 qpair failed and we were unable to recover it. 00:33:12.780 [2024-07-26 09:06:31.093388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.093414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.093552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.093577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 
00:33:12.781 [2024-07-26 09:06:31.093709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.093737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.093910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.093939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.094125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.094153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.094302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.094328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.094473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.094500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 
00:33:12.781 [2024-07-26 09:06:31.094651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.094677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.094842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.094872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.095039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.095089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.095281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.095308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.095462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.095488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 
00:33:12.781 [2024-07-26 09:06:31.095632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.095659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.095799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.095825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.095973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.095999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.096155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.096188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.096308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.096335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 
00:33:12.781 [2024-07-26 09:06:31.096492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.096534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.096727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.096753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.096892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.096918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.097070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.097100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.097275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.097302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 
00:33:12.781 [2024-07-26 09:06:31.097486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.097514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.097653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.097683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.097840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.097869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.098026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.098057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.098258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.098285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 
00:33:12.781 [2024-07-26 09:06:31.098434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.098460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.098618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.098644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.098763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.098791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.098941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.098968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.099120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.099147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 
00:33:12.781 [2024-07-26 09:06:31.099308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.099337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.099501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.099525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.099697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.099724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.099866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.099895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.781 [2024-07-26 09:06:31.100040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.100077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 
00:33:12.781 [2024-07-26 09:06:31.100275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.781 [2024-07-26 09:06:31.100304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.781 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.100451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.100477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.100596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.100622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.100774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.100804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.100973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.101021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 
00:33:12.782 [2024-07-26 09:06:31.101238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.101266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.101432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.101463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.101618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.101644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.101814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.101871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.102021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.102046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 
00:33:12.782 [2024-07-26 09:06:31.102205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.102231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.102352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.102384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.102536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.102569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.102716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.102742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.102855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.102880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 
00:33:12.782 [2024-07-26 09:06:31.103005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.103032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.103200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.103227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.103375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.103401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.103555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.103586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.103734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.103761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 
00:33:12.782 [2024-07-26 09:06:31.103910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.103936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.104143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.104170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.104324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.104366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.104533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.104563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.104694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.104724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 
00:33:12.782 [2024-07-26 09:06:31.104920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.104949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.105078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.105105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.105276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.105302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.105464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.105491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.105645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.105672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 
00:33:12.782 [2024-07-26 09:06:31.105821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.105847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.105993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.106023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.106181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.106212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.106356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.106385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.106562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.106589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 
00:33:12.782 [2024-07-26 09:06:31.106735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.106761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.106901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.106927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.107049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.782 [2024-07-26 09:06:31.107086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.782 qpair failed and we were unable to recover it. 00:33:12.782 [2024-07-26 09:06:31.107235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.107262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.107478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.107507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 
00:33:12.783 [2024-07-26 09:06:31.107633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.107659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.107811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.107837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.107979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.108005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.108131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.108158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.108308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.108335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 
00:33:12.783 [2024-07-26 09:06:31.108486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.108512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.108644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.108670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.108786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.108813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.108959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.108986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.109136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.109162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 
00:33:12.783 [2024-07-26 09:06:31.109306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.109332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.109506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.109534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.109693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.109722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.109891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.109920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.110040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.110072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 
00:33:12.783 [2024-07-26 09:06:31.110238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.110277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.110412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.110438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.110589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.110614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.110746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.110779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.110934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.110962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 
00:33:12.783 [2024-07-26 09:06:31.111098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.111128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.111298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.111323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.111470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.111495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.111624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.111651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.111797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.111822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 
00:33:12.783 [2024-07-26 09:06:31.111973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.111999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.112137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.112163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.112353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.112381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.112534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.112582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.112725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.112750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 
00:33:12.783 [2024-07-26 09:06:31.112893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.112935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.113094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.783 [2024-07-26 09:06:31.113124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.783 qpair failed and we were unable to recover it. 00:33:12.783 [2024-07-26 09:06:31.113285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.113331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.113477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.113505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.113677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.113703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 
00:33:12.784 [2024-07-26 09:06:31.113850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.113876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.114005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.114031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.114202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.114228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.114373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.114415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.114594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.114627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 
00:33:12.784 [2024-07-26 09:06:31.114862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.114893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.115073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.115100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.115244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.115270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.115430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.115456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.115572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.115598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 
00:33:12.784 [2024-07-26 09:06:31.115715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.115740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.115892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.115917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.116080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.116109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.116267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.116292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.116478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.116503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 
00:33:12.784 [2024-07-26 09:06:31.116647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.116672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.116797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.116822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.116935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.116961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.117110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.117137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.117283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.117308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 
00:33:12.784 [2024-07-26 09:06:31.117444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.117473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.117619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.117648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.117818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.117843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.117956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3a470 is same with the state(5) to be set 00:33:12.784 [2024-07-26 09:06:31.118156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.118198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 00:33:12.784 [2024-07-26 09:06:31.118357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.784 [2024-07-26 09:06:31.118389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.784 qpair failed and we were unable to recover it. 
00:33:12.784 [... 2024-07-26 09:06:31.118558 through 09:06:31.137844: the same three-line sequence repeats — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 (and intermittently 0xb2c4b0) with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:33:12.787 [2024-07-26 09:06:31.138015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.787 [2024-07-26 09:06:31.138047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.787 qpair failed and we were unable to recover it. 00:33:12.787 [2024-07-26 09:06:31.138223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.787 [2024-07-26 09:06:31.138249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.787 qpair failed and we were unable to recover it. 00:33:12.787 [2024-07-26 09:06:31.138406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.787 [2024-07-26 09:06:31.138432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.787 qpair failed and we were unable to recover it. 00:33:12.787 [2024-07-26 09:06:31.138554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.787 [2024-07-26 09:06:31.138580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.787 qpair failed and we were unable to recover it. 00:33:12.787 [2024-07-26 09:06:31.138729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.787 [2024-07-26 09:06:31.138755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.787 qpair failed and we were unable to recover it. 
00:33:12.787 [2024-07-26 09:06:31.138892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.787 [2024-07-26 09:06:31.138920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.787 qpair failed and we were unable to recover it. 00:33:12.787 [2024-07-26 09:06:31.139106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.787 [2024-07-26 09:06:31.139135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.787 qpair failed and we were unable to recover it. 00:33:12.787 [2024-07-26 09:06:31.139276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.787 [2024-07-26 09:06:31.139301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.787 qpair failed and we were unable to recover it. 00:33:12.787 [2024-07-26 09:06:31.139421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.787 [2024-07-26 09:06:31.139450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.787 qpair failed and we were unable to recover it. 00:33:12.787 [2024-07-26 09:06:31.139580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.787 [2024-07-26 09:06:31.139610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.787 qpair failed and we were unable to recover it. 
00:33:12.788 [2024-07-26 09:06:31.139786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.139812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.139939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.139966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.140127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.140154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.140331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.140357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.140491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.140521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 
00:33:12.788 [2024-07-26 09:06:31.140669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.140695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.140841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.140868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.141016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.141044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.141187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.141214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.141377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.141403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 
00:33:12.788 [2024-07-26 09:06:31.141572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.141598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.141719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.141746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.141907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.141937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.142172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.142202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.142390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.142419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 
00:33:12.788 [2024-07-26 09:06:31.142589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.142616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.142760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.142787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.142952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.142981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.143154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.143183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.143334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.143364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 
00:33:12.788 [2024-07-26 09:06:31.143570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.143597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.143748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.143774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.143922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.143965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.144132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.144161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.144356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.144382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 
00:33:12.788 [2024-07-26 09:06:31.144510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.144536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.144691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.144717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.144869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.144896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.145071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.145097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.145246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.145272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 
00:33:12.788 [2024-07-26 09:06:31.145420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.145446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.788 [2024-07-26 09:06:31.145610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.788 [2024-07-26 09:06:31.145639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.788 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.145778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.145804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.145949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.145979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.146127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.146155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 
00:33:12.789 [2024-07-26 09:06:31.146330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.146356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.146480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.146506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.146623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.146649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.146795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.146824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.146960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.146986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 
00:33:12.789 [2024-07-26 09:06:31.147109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.147135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.147296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.147322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.147440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.147467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.147615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.147641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.147798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.147826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 
00:33:12.789 [2024-07-26 09:06:31.147991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.148020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.148185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.148211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.148331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.148360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.148508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.148535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.148682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.148726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 
00:33:12.789 [2024-07-26 09:06:31.148922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.148948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.149098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.149124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.149297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.149329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.149480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.149506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.149629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.149655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 
00:33:12.789 [2024-07-26 09:06:31.149779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.149808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.149932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.149961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.150084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.150110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.150284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.150309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.150447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.150476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 
00:33:12.789 [2024-07-26 09:06:31.150635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.150661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.150837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.150865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.151014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.151040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.151174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.151200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 00:33:12.789 [2024-07-26 09:06:31.151363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.789 [2024-07-26 09:06:31.151389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.789 qpair failed and we were unable to recover it. 
00:33:12.789 [2024-07-26 09:06:31.151533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.789 [2024-07-26 09:06:31.151559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:12.789 qpair failed and we were unable to recover it.
[log trimmed: the same connect()/qpair-failure triplet repeats over a hundred more times between 09:06:31.151680 and 09:06:31.172852, always errno = 111 against 10.0.0.2:4420, cycling through tqpair handles 0x7fcfa4000b90, 0x7fcfac000b90, and 0x7fcfb4000b90]
00:33:12.792 [2024-07-26 09:06:31.173027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.792 [2024-07-26 09:06:31.173056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.173212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.173240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.173359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.173385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.173553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.173582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.173756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.173787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 
00:33:12.793 [2024-07-26 09:06:31.173953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.173980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.174157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.174184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.174327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.174374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.174511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.174542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.174797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.174843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 
00:33:12.793 [2024-07-26 09:06:31.174978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.175008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.175180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.175207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.175392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.175419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.175591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.175620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.175754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.175783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 
00:33:12.793 [2024-07-26 09:06:31.176010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.176038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.176191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.176220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.176365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.176394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.176567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.176599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.176758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.176787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 
00:33:12.793 [2024-07-26 09:06:31.176923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.176951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.177099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.177126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.177273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.177300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.177462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.177488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.177690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.177720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 
00:33:12.793 [2024-07-26 09:06:31.177897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.177926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.178110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.178137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.178255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.178281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.178426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.178452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.178621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.178651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 
00:33:12.793 [2024-07-26 09:06:31.178778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.178807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.178946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.793 [2024-07-26 09:06:31.178972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.793 qpair failed and we were unable to recover it. 00:33:12.793 [2024-07-26 09:06:31.179148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.179179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.179341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.179368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.179490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.179519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 
00:33:12.794 [2024-07-26 09:06:31.179683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.179710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.179847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.179876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.180009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.180039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.180217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.180243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.180372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.180397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 
00:33:12.794 [2024-07-26 09:06:31.180556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.180585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.180755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.180799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.181008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.181038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.181205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.181231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.181373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.181399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 
00:33:12.794 [2024-07-26 09:06:31.181593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.181622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.181794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.181824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.182004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.182033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.182176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.182205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.182353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.182383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 
00:33:12.794 [2024-07-26 09:06:31.182570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.182603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.182812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.182842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.183026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.183055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.183193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.183219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.183369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.183395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 
00:33:12.794 [2024-07-26 09:06:31.183540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.183567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.183714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.183740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.183914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.183946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.184120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.184147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.184324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.184350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 
00:33:12.794 [2024-07-26 09:06:31.184533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.184559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.184731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.184757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.184880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.184908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.185028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.185054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.185211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.185238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 
00:33:12.794 [2024-07-26 09:06:31.185383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.185411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.185598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.185624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.185788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.185817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.185985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.186012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.186175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.186202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 
00:33:12.794 [2024-07-26 09:06:31.186355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.794 [2024-07-26 09:06:31.186383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.794 qpair failed and we were unable to recover it. 00:33:12.794 [2024-07-26 09:06:31.186510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.186536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 00:33:12.795 [2024-07-26 09:06:31.186676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.186710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 00:33:12.795 [2024-07-26 09:06:31.186915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.186946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 00:33:12.795 [2024-07-26 09:06:31.187103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.187129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 
00:33:12.795 [2024-07-26 09:06:31.187297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.187324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 00:33:12.795 [2024-07-26 09:06:31.187486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.187520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 00:33:12.795 [2024-07-26 09:06:31.187652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.187681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 00:33:12.795 [2024-07-26 09:06:31.187911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.187941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 00:33:12.795 [2024-07-26 09:06:31.188121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.188147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 
00:33:12.795 [2024-07-26 09:06:31.188289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.188315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 00:33:12.795 [2024-07-26 09:06:31.188468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.795 [2024-07-26 09:06:31.188498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:12.795 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.188668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.188698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.188833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.188863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.189063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.189103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 
00:33:13.078 [2024-07-26 09:06:31.189263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.189300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.189514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.189567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.189755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.189809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.189962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.189998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.190176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.190205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 
00:33:13.078 [2024-07-26 09:06:31.190342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.190391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.190543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.190589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.190717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.190743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.190888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.190914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.191030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.191073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 
00:33:13.078 [2024-07-26 09:06:31.191215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.191251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.191414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.191442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.191574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.191600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.191747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.191774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.191923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.191954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 
00:33:13.078 [2024-07-26 09:06:31.192099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.192126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.192279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.192305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.192440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.192466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.192620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.192646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.192794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.192821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 
00:33:13.078 [2024-07-26 09:06:31.192981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.193010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.193196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.193234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.193393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.193420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.193594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.193622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.193777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.193824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 
00:33:13.078 [2024-07-26 09:06:31.193953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.193981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.194125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.194152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.194295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.194321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.194485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.194528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.194734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.194780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 
00:33:13.078 [2024-07-26 09:06:31.194929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.194958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.195121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.195148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.195320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.195345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.195484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.195513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.195672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.195701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 
00:33:13.078 [2024-07-26 09:06:31.195892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.195920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.196083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.196126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.196302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.196328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.196473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.196502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.196660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.196689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 
00:33:13.078 [2024-07-26 09:06:31.196845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.196872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.197039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.197074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.197199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.197225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.197345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.197371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.197513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.197541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 
00:33:13.078 [2024-07-26 09:06:31.197728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.197756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.197913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.197942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.198106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.198132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.078 qpair failed and we were unable to recover it. 00:33:13.078 [2024-07-26 09:06:31.198270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.078 [2024-07-26 09:06:31.198295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.198464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.198492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 
00:33:13.079 [2024-07-26 09:06:31.198743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.198788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.198960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.198985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.199122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.199149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.199272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.199297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.199445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.199471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 
00:33:13.079 [2024-07-26 09:06:31.199598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.199623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.199796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.199823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.199984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.200012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.200166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.200192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.200340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.200366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 
00:33:13.079 [2024-07-26 09:06:31.200519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.200547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.200731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.200759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.201008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.201035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.201203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.201228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.201402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.201427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 
00:33:13.079 [2024-07-26 09:06:31.201597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.201624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.201804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.201836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.202017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.202045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.202220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.202249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.202370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.202416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 
00:33:13.079 [2024-07-26 09:06:31.202577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.202605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.202747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.202790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.202950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.202975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.203099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.203125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.203249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.203274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 
00:33:13.079 [2024-07-26 09:06:31.203395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.203420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.203539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.203564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.203730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.203758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.203927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.203954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.204102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.204128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 
00:33:13.079 [2024-07-26 09:06:31.204243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.204268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.204400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.204429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.204584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.204613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.204830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.204858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.204999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.205024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 
00:33:13.079 [2024-07-26 09:06:31.205167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.205193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.205331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.205357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.205491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.205519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.205671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.205699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.205922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.205950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 
00:33:13.079 [2024-07-26 09:06:31.206092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.206117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.206264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.206290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.206408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.206433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.206547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.206572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 00:33:13.079 [2024-07-26 09:06:31.206724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.079 [2024-07-26 09:06:31.206752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.079 qpair failed and we were unable to recover it. 
00:33:13.079 [2024-07-26 09:06:31.206930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.206957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.207111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.207138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.207285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.207311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.207458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.207484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.207602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.207628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.207781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.207807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.207943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.207971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.208122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.208149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.208295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.208321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.208441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.208466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.208580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.208605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.208760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.208785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.208924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.208952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.209077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.209119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.209265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.209294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.209444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.209470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.209615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.079 [2024-07-26 09:06:31.209641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.079 qpair failed and we were unable to recover it.
00:33:13.079 [2024-07-26 09:06:31.209756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.209781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.209924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.209950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.210068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.210094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.210236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.210261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.210407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.210433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.210548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.210573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.210687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.210712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.210824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.210849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.210960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.210985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.211130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.211157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.211283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.211310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.211430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.211455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.211627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.211652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.211767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.211792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.211940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.211965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.212093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.212118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.212242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.212268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.212382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.212407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.212553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.212579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.212699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.212724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.212875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.212901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.213014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.213040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.213185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.213224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.213378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.213407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.213580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.213623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.213755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.213781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.213925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.213951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.214099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.214125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.214273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.214299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.214418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.214460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.214590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.214618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.214857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.214882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.215026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.215051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.215186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.215212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.215359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.215384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.215532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.215574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.215706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.215734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.215984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.216012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.216207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.216246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.216399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.216427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.216588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.216631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.216790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.216821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.216959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.216990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.217160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.217187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.217347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.217376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.217505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.217535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.217760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.217788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.217921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.217950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.218085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.218127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.218266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.218292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.218462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.218509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.218695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.218730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.218895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.218924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.219050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.219091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.219233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.219259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.219410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.080 [2024-07-26 09:06:31.219436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.080 qpair failed and we were unable to recover it.
00:33:13.080 [2024-07-26 09:06:31.219572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.219607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.219779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.219807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.219990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.220018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.220173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.220199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.220361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.220389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.220552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.220578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.220685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.220710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.220912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.220941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.221105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.221131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.221264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.221304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.221455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.221497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.221662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.221692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.221883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.221912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.222075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.222119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.222271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.222296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.222443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.222473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.222604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.222634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.222802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.222857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.223023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.223068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.223203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.223231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.223413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.223449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.223644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.081 [2024-07-26 09:06:31.223693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.081 qpair failed and we were unable to recover it.
00:33:13.081 [2024-07-26 09:06:31.223850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.223878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.224035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.224067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.224191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.224217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.224365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.224390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.224599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.224624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 
00:33:13.081 [2024-07-26 09:06:31.224820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.224871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.225012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.225041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.225225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.225251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.225393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.225420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.225555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.225584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 
00:33:13.081 [2024-07-26 09:06:31.225766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.225794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.225998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.226037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.226180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.226209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.226383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.226427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.226666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.226696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 
00:33:13.081 [2024-07-26 09:06:31.226890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.226918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.227083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.227135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.227308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.227334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.227489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.227517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.227675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.227703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 
00:33:13.081 [2024-07-26 09:06:31.227860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.227889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.228075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.228118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.228261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.228286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.228427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.228455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.228588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.228616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 
00:33:13.081 [2024-07-26 09:06:31.228783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.228811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.229009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.229038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.229195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.229223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.229426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.229469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.229705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.229749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 
00:33:13.081 [2024-07-26 09:06:31.229912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.229956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.230107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.230133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.230299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.230344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.230511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.230554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.230726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.230769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 
00:33:13.081 [2024-07-26 09:06:31.230915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.230942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.231092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.231119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.231237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.231263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.231416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.231443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.231568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.231596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 
00:33:13.081 [2024-07-26 09:06:31.231797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.081 [2024-07-26 09:06:31.231833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.081 qpair failed and we were unable to recover it. 00:33:13.081 [2024-07-26 09:06:31.232016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.232043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.232196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.232222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.232368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.232414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.232553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.232596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 
00:33:13.082 [2024-07-26 09:06:31.232794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.232837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.232984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.233010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.233132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.233159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.233327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.233352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.233517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.233542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 
00:33:13.082 [2024-07-26 09:06:31.233758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.233806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.233992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.234020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.234190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.234216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.234393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.234418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.234569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.234597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 
00:33:13.082 [2024-07-26 09:06:31.234738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.234766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.234893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.234921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.235076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.235122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.235235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.235261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.235455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.235506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 
00:33:13.082 [2024-07-26 09:06:31.235707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.235750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.235937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.235992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.236175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.236201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.236359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.236403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.236540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.236583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 
00:33:13.082 [2024-07-26 09:06:31.236744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.236787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.236915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.236941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.237069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.237095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.237268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.237297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.237453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.237481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 
00:33:13.082 [2024-07-26 09:06:31.237670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.237698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.237870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.237908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.238103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.238129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.238278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.238304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.238481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.238513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 
00:33:13.082 [2024-07-26 09:06:31.238691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.238719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.238878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.238906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.239090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.239117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.239262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.239287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.239482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.239510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 
00:33:13.082 [2024-07-26 09:06:31.239692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.239740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.239924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.239953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.240112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.240138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.240287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.240312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.240477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.240505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 
00:33:13.082 [2024-07-26 09:06:31.240663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.240691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.240850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.240877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.241031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.241066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.241203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.241228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 00:33:13.082 [2024-07-26 09:06:31.241398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.082 [2024-07-26 09:06:31.241423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.082 qpair failed and we were unable to recover it. 
00:33:13.084 [2024-07-26 09:06:31.262241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.262266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.262413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.262454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.262613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.262641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.262802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.262827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.262979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.263005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 
00:33:13.084 [2024-07-26 09:06:31.263133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.263160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.263307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.263332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.263475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.263500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.263648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.263689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.263855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.263884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 
00:33:13.084 [2024-07-26 09:06:31.264043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.264107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.264256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.264282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.264400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.264426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.264538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.264563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.264735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.264763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 
00:33:13.084 [2024-07-26 09:06:31.264933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.264959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.265087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.265114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.265287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.265330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.265527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.265553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.265745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.265773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 
00:33:13.084 [2024-07-26 09:06:31.265910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.265938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.084 qpair failed and we were unable to recover it. 00:33:13.084 [2024-07-26 09:06:31.266109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.084 [2024-07-26 09:06:31.266135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.266296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.266324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.266462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.266490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.266634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.266659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.266801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.266841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.267004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.267032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.267211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.267238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.267395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.267423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.267554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.267583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.267747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.267772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.267916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.267945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.268098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.268127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.268315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.268341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.268535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.268563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.268680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.268708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.268847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.268872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.269009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.269034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.269243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.269271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.269439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.269464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.269587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.269613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.269762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.269787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.269922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.269950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.270125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.270151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.270291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.270316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.270537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.270562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.270704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.270746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.270881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.270909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.271100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.271127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.271298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.271326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.271486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.271511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.271653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.271678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.271839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.271867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.272015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.272044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.272183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.272209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.272354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.272379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.272525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.272553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.272717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.272742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.272862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.272902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.273094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.273123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.273266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.273291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.273435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.273460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.273585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.273610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.273721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.273746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.273865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.273890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.274035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.274066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.274225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.274250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.274390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.274415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.274584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.274612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.274781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.274806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.274948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.274973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.275086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.275113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.275276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.275314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.275484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.275529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.275666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.275710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.275907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.275955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.276103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.276129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.276263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.276307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.276486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.276519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.276706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.276750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 
00:33:13.085 [2024-07-26 09:06:31.276877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.276903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.277070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.277100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.277288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.277332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.085 [2024-07-26 09:06:31.277530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.085 [2024-07-26 09:06:31.277558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.085 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.277742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.277768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.277941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.277972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.278149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.278192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.278333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.278364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.278522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.278550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.278711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.278739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.278879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.278905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.279082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.279112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.279245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.279273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.279432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.279460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.279615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.279643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.279805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.279833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.279995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.280021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.280170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.280196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.280360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.280388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.280632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.280679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.280852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.280895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.281010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.281035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.281210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.281258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.281403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.281446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.281611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.281654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.281823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.281851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.282004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.282030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.282173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.282217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.282410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.282453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.282609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.282653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.282795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.282820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.282963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.282991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.283161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.283195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.283347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.283376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.283506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.283534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.283692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.283720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.283872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.283900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.284088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.284133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.284271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.284300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.284486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.284531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.284718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.284773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.284927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.284953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.285119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.285148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.285363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.285392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.285578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.285620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.285745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.285770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.285913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.285939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.286082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.286108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.286249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.286293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.286464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.286506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.286677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.286703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.286818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.286845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.286998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.287025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.287163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.287189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.287326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.287355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.287495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.287523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.287687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.287715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.287868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.287896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.288042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.288075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 00:33:13.086 [2024-07-26 09:06:31.288228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.086 [2024-07-26 09:06:31.288261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.086 qpair failed and we were unable to recover it. 
00:33:13.086 [2024-07-26 09:06:31.288401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.288428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.288682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.288710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.288843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.288872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.289034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.289072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.289218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.289242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 
00:33:13.087 [2024-07-26 09:06:31.289388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.289413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.289553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.289581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.289742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.289770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.289903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.289930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.290070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.290114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 
00:33:13.087 [2024-07-26 09:06:31.290250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.290278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.290439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.290467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.290622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.290650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.290903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.290931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.291076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.291122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 
00:33:13.087 [2024-07-26 09:06:31.291275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.291300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.291456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.291485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.291626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.291654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.291795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.291823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.291978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.292005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 
00:33:13.087 [2024-07-26 09:06:31.292160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.292186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.292326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.292352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.292517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.292545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.292676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.292704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.292884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.292912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 
00:33:13.087 [2024-07-26 09:06:31.293052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.293092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.293232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.293258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.293400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.293442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.293567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.293596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 00:33:13.087 [2024-07-26 09:06:31.293735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.087 [2024-07-26 09:06:31.293779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.087 qpair failed and we were unable to recover it. 
00:33:13.087 [2024-07-26 09:06:31.293947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.293972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.294091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.294118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.294233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.294259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.294373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.294398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.294582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.294610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.294750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.294778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.294937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.294965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.295108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.295140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.295259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.295285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.295417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.295446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.295694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.295756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.295943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.295982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.296151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.296190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.296356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.296386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.296572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.296613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.296763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.296791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.296953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.296978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.297100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.297127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.297274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.297300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.297461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.297490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.297631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.297674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.297833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.297861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.298052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.298084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.298204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.298229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.298402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.298430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.298584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.298611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.298742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.298770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.298956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.298984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.299151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.299178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.299322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.299348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.299490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.299515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.299662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.299691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.299849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.299876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.087 qpair failed and we were unable to recover it.
00:33:13.087 [2024-07-26 09:06:31.300041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.087 [2024-07-26 09:06:31.300080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.300221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.300246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.300439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.300467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.300625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.300653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.300833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.300866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.301027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.301055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.301233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.301258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.301422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.301449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.301585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.301613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.301774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.301802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.301958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.301986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.302135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.302161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.302298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.302323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.302462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.302489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.302672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.302699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.302827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.302854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.303015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.303041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.303191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.303216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.303337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.303362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.303475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.303500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.303645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.303671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.303787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.303812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.303985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.304013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.304155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.304181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.304328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.304353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.304490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.304516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.304687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.304715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.304843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.304871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.305040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.305072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.305191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.305216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.305353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.305381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.305523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.305549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.305672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.305714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.305865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.305893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.306057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.306089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.306227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.306252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.306397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.306424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.306559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.306584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.306694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.306719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.306884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.306912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.307053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.307084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.307218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.307243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.307365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.307390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.307547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.307572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.307683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.307709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.307860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.307885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.308023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.308048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.308197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.308223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.308353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.308379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.308522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.308547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.308671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.308698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.308866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.308891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.309041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.309078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.309250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.309275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.309429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.309454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.309564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.309590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.309703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.309729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.309845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.309871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.310010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.310035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.310196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.310221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.310392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.310417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.310595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.310620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.310748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.310773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.310884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.310909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.088 [2024-07-26 09:06:31.311026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.088 [2024-07-26 09:06:31.311051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.088 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.311173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.311199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.311374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.311400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.311547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.311572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.311716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.311744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.311862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.311890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.312023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.312048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.312185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.312211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.312356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.312385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.312530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.312555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.312675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.312700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.312848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.312874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.313021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.313046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.313187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.313212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.313323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.313348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.313465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.313490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.313635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.313660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.313773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.089 [2024-07-26 09:06:31.313798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.089 qpair failed and we were unable to recover it.
00:33:13.089 [2024-07-26 09:06:31.313912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.313937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.314108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.314135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.314279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.314304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.314444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.314469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.314586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.314613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 
00:33:13.089 [2024-07-26 09:06:31.314760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.314788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.314922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.314947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.315081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.315107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.315250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.315275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.315426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.315452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 
00:33:13.089 [2024-07-26 09:06:31.315595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.315620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.315743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.315769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.315889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.315914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.316029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.316054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.316247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.316276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 
00:33:13.089 [2024-07-26 09:06:31.316433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.316458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.316595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.316620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.316775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.316801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.316953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.316978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.317105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.317131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 
00:33:13.089 [2024-07-26 09:06:31.317325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.317353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.317503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.317528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.317667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.317692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.317833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.317859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.317972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.317998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 
00:33:13.089 [2024-07-26 09:06:31.318116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.318142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.318260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.318285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.318415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.318440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.318585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.318611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.318724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.318749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 
00:33:13.089 [2024-07-26 09:06:31.318866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.318892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.319005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.319034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.319168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.319194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.319331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.319357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.319471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.319495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 
00:33:13.089 [2024-07-26 09:06:31.319667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.319694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.319863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.089 [2024-07-26 09:06:31.319888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.089 qpair failed and we were unable to recover it. 00:33:13.089 [2024-07-26 09:06:31.320013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.320038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.320199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.320224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.320346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.320371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 
00:33:13.090 [2024-07-26 09:06:31.320495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.320519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.320662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.320686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.320829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.320854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.320992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.321020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.321195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.321224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 
00:33:13.090 [2024-07-26 09:06:31.321397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.321422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.321544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.321570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.321711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.321736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.321882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.321907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.322029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.322079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 
00:33:13.090 [2024-07-26 09:06:31.322233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.322258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.322376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.322400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.322514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.322539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.322681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.322706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.322818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.322843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 
00:33:13.090 [2024-07-26 09:06:31.322954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.322979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.323116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.323142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.323290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.323314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.323463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.323492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.323656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.323685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 
00:33:13.090 [2024-07-26 09:06:31.323830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.323855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.324002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.324027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.324160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.324186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.324304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.324329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.324469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.324493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 
00:33:13.090 [2024-07-26 09:06:31.324679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.324707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.324875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.324901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.325051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.325081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.325251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.325277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.325421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.325446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 
00:33:13.090 [2024-07-26 09:06:31.325592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.325617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.325759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.325784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.325932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.325957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.326101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.326127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.326262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.326291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 
00:33:13.090 [2024-07-26 09:06:31.326433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.326458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.326601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.326626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.326778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.326805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.326933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.326959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 00:33:13.090 [2024-07-26 09:06:31.327097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.090 [2024-07-26 09:06:31.327123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.090 qpair failed and we were unable to recover it. 
00:33:13.090 [... the same connect() failed, errno = 111 / sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence repeats for every subsequent reconnection attempt through 2024-07-26 09:06:31.346922 ...]
00:33:13.092 [2024-07-26 09:06:31.347088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.347117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.347267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.347292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.347436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.347460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.347605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.347644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.347801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.347834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 
00:33:13.092 [2024-07-26 09:06:31.347974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.347999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.348147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.348173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.348383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.348408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.348557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.348582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.348694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.348719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 
00:33:13.092 [2024-07-26 09:06:31.348844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.348868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.348993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.349019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.349166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.349191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.349364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.349391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.349553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.349578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 
00:33:13.092 [2024-07-26 09:06:31.349727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.349751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.349920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.349961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.350099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.350124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.350273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.350313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.350477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.350505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 
00:33:13.092 [2024-07-26 09:06:31.350697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.350722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.350869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.350894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.351078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.351107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.351278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.351303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.351449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.351474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 
00:33:13.092 [2024-07-26 09:06:31.351616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.351656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.351822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.351847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.092 qpair failed and we were unable to recover it. 00:33:13.092 [2024-07-26 09:06:31.352010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.092 [2024-07-26 09:06:31.352037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.352209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.352237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.352379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.352405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 
00:33:13.093 [2024-07-26 09:06:31.352555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.352580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.352725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.352765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.352961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.352987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.353150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.353179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.353370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.353396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 
00:33:13.093 [2024-07-26 09:06:31.353517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.353542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.353710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.353735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.353925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.353953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.354118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.354143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.354265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.354290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 
00:33:13.093 [2024-07-26 09:06:31.354436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.354461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.354612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.354637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.354782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.354807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.354943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.354969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.355123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.355149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 
00:33:13.093 [2024-07-26 09:06:31.355288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.355334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.355523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.355551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.355693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.355717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.355888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.355914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.356107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.356137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 
00:33:13.093 [2024-07-26 09:06:31.356283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.356308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.356486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.356526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.356685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.356713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.356880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.356905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.357042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.357090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 
00:33:13.093 [2024-07-26 09:06:31.357254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.357282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.357446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.357471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.357611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.357654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.357790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.357818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.357999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.358024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 
00:33:13.093 [2024-07-26 09:06:31.358204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.358230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.358392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.358421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.358557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.358582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.358727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.358751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.358920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.358944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 
00:33:13.093 [2024-07-26 09:06:31.359089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.359115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.359230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.359254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.359394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.359418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.359536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.359560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.359734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.359759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 
00:33:13.093 [2024-07-26 09:06:31.359921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.359946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.360119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.360146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.360311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.360338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.360478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.360507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 00:33:13.093 [2024-07-26 09:06:31.360702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.093 [2024-07-26 09:06:31.360727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.093 qpair failed and we were unable to recover it. 
00:33:13.093 [2024-07-26 09:06:31.360863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.093 [2024-07-26 09:06:31.360890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.093 qpair failed and we were unable to recover it.
00:33:13.095 [2024-07-26 09:06:31.382439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.382467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.382636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.382660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.382836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.382861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.383032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.383057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.383233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.383259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 
00:33:13.095 [2024-07-26 09:06:31.383431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.383458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.383638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.383665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.383866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.383891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.384015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.384040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.384179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.384204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 
00:33:13.095 [2024-07-26 09:06:31.384354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.384379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.384526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.384567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.384709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.384733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.384903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.384928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.385054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.385099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 
00:33:13.095 [2024-07-26 09:06:31.385240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.385265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.385389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.385414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.385528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.385553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.385698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.385725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.385860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.385885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 
00:33:13.095 [2024-07-26 09:06:31.386002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.095 [2024-07-26 09:06:31.386026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.095 qpair failed and we were unable to recover it. 00:33:13.095 [2024-07-26 09:06:31.386237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.386263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.386402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.386428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.386570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.386595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.386760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.386801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.386968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.386994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.387114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.387155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.387319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.387346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.387508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.387533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.387659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.387683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.387828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.387853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.387998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.388023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.388171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.388201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.388399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.388426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.388572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.388597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.388723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.388748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.388944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.388971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.389107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.389132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.389258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.389283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.389405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.389430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.389602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.389628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.389741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.389766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.389918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.389943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.390136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.390162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.390332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.390374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.390512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.390540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.390710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.390734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.390871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.390912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.391101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.391130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.391276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.391300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.391446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.391489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.391665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.391694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.391865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.391890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.392032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.392056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.392237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.392265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.392418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.392443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.392581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.392622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.392771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.392799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.392963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.392988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.393177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.393210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.393350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.393378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.393550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.393575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.393716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.393741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.393924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.393953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.394117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.394144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.394313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.394355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.394522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.394547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.394696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.394722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.394882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.394910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.395073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.395102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.395274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.395299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.395440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.395481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.395653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.395678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.395804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.395830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 00:33:13.096 [2024-07-26 09:06:31.395971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.096 [2024-07-26 09:06:31.395996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.096 qpair failed and we were unable to recover it. 
00:33:13.096 [2024-07-26 09:06:31.396160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.096 [2024-07-26 09:06:31.396190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.096 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111; sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry from 09:06:31.396362 through 09:06:31.416461 ...]
00:33:13.098 [2024-07-26 09:06:31.416616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.098 [2024-07-26 09:06:31.416643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.098 qpair failed and we were unable to recover it.
00:33:13.098 [2024-07-26 09:06:31.416800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.098 [2024-07-26 09:06:31.416826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.098 qpair failed and we were unable to recover it. 00:33:13.098 [2024-07-26 09:06:31.416992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.098 [2024-07-26 09:06:31.417020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.098 qpair failed and we were unable to recover it. 00:33:13.098 [2024-07-26 09:06:31.417198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.098 [2024-07-26 09:06:31.417225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.098 qpair failed and we were unable to recover it. 00:33:13.098 [2024-07-26 09:06:31.417370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.098 [2024-07-26 09:06:31.417395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.098 qpair failed and we were unable to recover it. 00:33:13.098 [2024-07-26 09:06:31.417565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.098 [2024-07-26 09:06:31.417590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.098 qpair failed and we were unable to recover it. 
00:33:13.098 [2024-07-26 09:06:31.417748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.098 [2024-07-26 09:06:31.417776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.098 qpair failed and we were unable to recover it. 00:33:13.098 [2024-07-26 09:06:31.417951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.098 [2024-07-26 09:06:31.417975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.098 qpair failed and we were unable to recover it. 00:33:13.098 [2024-07-26 09:06:31.418133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.098 [2024-07-26 09:06:31.418161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.098 qpair failed and we were unable to recover it. 00:33:13.098 [2024-07-26 09:06:31.418314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.098 [2024-07-26 09:06:31.418341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.098 qpair failed and we were unable to recover it. 00:33:13.098 [2024-07-26 09:06:31.418513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.098 [2024-07-26 09:06:31.418537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.418658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.418682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.418800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.418824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.418993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.419017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.419212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.419240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.419420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.419448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.419608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.419633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.419757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.419781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.419978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.420005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.420181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.420207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.420326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.420369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.420523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.420547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.420719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.420745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.420903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.420931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.421113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.421141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.421281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.421306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.421455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.421481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.421647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.421672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.421792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.421816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.421938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.421963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.422155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.422181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.422352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.422376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.422549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.422577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.422724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.422749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.422893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.422918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.423110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.423138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.423301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.423328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.423474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.423499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.423649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.423673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.423813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.423854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.424020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.424044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.424169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.424211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.424345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.424373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.424537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.424561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.424703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.424744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.424903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.424930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.425119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.425144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.425308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.425341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.425470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.425498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.425674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.425699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.425864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.425891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.426055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.426089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.426230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.426255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.426430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.426473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.426634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.426662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.426838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.426862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.427028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.427054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.427214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.427242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.427414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.427439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.427610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.427635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.427773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.427801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.427944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.427969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.428083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.428110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.428286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.428313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.428450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.428475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.428617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.428657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.428816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.428844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.428996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.429024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.429193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.429219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.429385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.429412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.429575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.429601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 
00:33:13.099 [2024-07-26 09:06:31.429770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.429798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.429961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.099 [2024-07-26 09:06:31.429989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.099 qpair failed and we were unable to recover it. 00:33:13.099 [2024-07-26 09:06:31.430181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.100 [2024-07-26 09:06:31.430206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.100 qpair failed and we were unable to recover it. 00:33:13.100 [2024-07-26 09:06:31.430351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.100 [2024-07-26 09:06:31.430376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.100 qpair failed and we were unable to recover it. 00:33:13.100 [2024-07-26 09:06:31.430507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.100 [2024-07-26 09:06:31.430532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.100 qpair failed and we were unable to recover it. 
00:33:13.100 [2024-07-26 09:06:31.430654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.100 [2024-07-26 09:06:31.430679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.100 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error (tqpair=0xb2c4b0, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence repeats through 2024-07-26 09:06:31.451256 ...]
00:33:13.101 [2024-07-26 09:06:31.451441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.451470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.451634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.451661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.451847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.451871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.452069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.452102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.452251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.452279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.452419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.452445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.452616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.452658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.452811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.452836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.452975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.453000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.453140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.453186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.453334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.453358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.453499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.453524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.453645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.453686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.453872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.453899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.454044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.454076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.454217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.454242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.454380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.454405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.454525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.454550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.454670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.454694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.454891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.454918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.455089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.455115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.455238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.455263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.455374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.455399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.455545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.455571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.455717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.455742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.455868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.455894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.456074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.456103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.456290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.456319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.456454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.456482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.456634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.456658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.456779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.456803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.456977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.457002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.457170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.457196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.457363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.457395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.457523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.457550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.457703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.457728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.457875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.457918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.458102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.458130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.458297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.458322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.458470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.458495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.458611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.458635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.458810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.458836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.458961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.458986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.459134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.459159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.459343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.459368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.459492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.459518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.459692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.459733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.459909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.459934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.460052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.460106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.460306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.460334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.460473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.460497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.460643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.460667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.460823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.460849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.460994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.461019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.461174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.461200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.461366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.461394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.461537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.461562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.461709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.461734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.461894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.461921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.462083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.462109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 
00:33:13.102 [2024-07-26 09:06:31.462229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.462259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.462455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.462482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.462649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.102 [2024-07-26 09:06:31.462674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.102 qpair failed and we were unable to recover it. 00:33:13.102 [2024-07-26 09:06:31.462796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.462822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.462973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.462997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 
00:33:13.103 [2024-07-26 09:06:31.463208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.463233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.463395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.463423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.463563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.463591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.463723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.463749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.463868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.463894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 
00:33:13.103 [2024-07-26 09:06:31.464070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.464096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.464246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.464272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.464422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.464448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.464573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.464598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.464750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.464775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 
00:33:13.103 [2024-07-26 09:06:31.464919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.464945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.465113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.465140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.465290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.465315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.465473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.465497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 00:33:13.103 [2024-07-26 09:06:31.465616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.103 [2024-07-26 09:06:31.465641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.103 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.484647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.484672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.484823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.484849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.484991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.485016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.485162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.485188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.485334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.485359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.485506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.485531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.485706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.485731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.485860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.485886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.486009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.486034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.486158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.486184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.486357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.486383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.486499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.486524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.486663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.486689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.486862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.486887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.487034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.487064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.487184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.487209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.487334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.487361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.487503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.487528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.487677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.487703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.487842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.487868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.487993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.488032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.488195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.488224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.488346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.488372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.488516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.488542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.488690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.488716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.488845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.488871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.489055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.489094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.489245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.489270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.489442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.489467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.489609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.489634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.489783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.489808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.489960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.489986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.490133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.490171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.490349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.490376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.490532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.490557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.490741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.490766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.490886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.490911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.491084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.491110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.491280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.491305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.491458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.491483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.491610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.491636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.491749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.491774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.491940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.491965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.492108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.492134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.492301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.492326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.492472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.492497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.492628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.492653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.492832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.492858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.492982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.493007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.493186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.493212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.493359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.493385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.493528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.493554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.493699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.493725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.493865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.493890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 00:33:13.105 [2024-07-26 09:06:31.494067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.494094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.105 qpair failed and we were unable to recover it. 
00:33:13.105 [2024-07-26 09:06:31.494220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.105 [2024-07-26 09:06:31.494246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.494415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.494441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.494582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.494607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.494757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.494783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.494960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.494986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 
00:33:13.106 [2024-07-26 09:06:31.495107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.495138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.495282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.495307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.495480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.495506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.495654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.495681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.495852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.495878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 
00:33:13.106 [2024-07-26 09:06:31.496050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.496082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.496255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.496280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.496420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.496446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.496592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.496617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.496740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.496766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 
00:33:13.106 [2024-07-26 09:06:31.496912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.496939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.497097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.497124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.497239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.497265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.497440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.497466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.497617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.497643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 
00:33:13.106 [2024-07-26 09:06:31.497787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.497813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.497935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.497961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.498102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.498141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.498288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.498315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 00:33:13.106 [2024-07-26 09:06:31.498467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.106 [2024-07-26 09:06:31.498493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.106 qpair failed and we were unable to recover it. 
00:33:13.394 [2024-07-26 09:06:31.518257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.394 [2024-07-26 09:06:31.518283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.394 qpair failed and we were unable to recover it. 00:33:13.394 [2024-07-26 09:06:31.518400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.394 [2024-07-26 09:06:31.518427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.394 qpair failed and we were unable to recover it. 00:33:13.394 [2024-07-26 09:06:31.518600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.394 [2024-07-26 09:06:31.518627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.394 qpair failed and we were unable to recover it. 00:33:13.394 [2024-07-26 09:06:31.518756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.394 [2024-07-26 09:06:31.518781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.394 qpair failed and we were unable to recover it. 00:33:13.394 [2024-07-26 09:06:31.518933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.394 [2024-07-26 09:06:31.518959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.394 qpair failed and we were unable to recover it. 
00:33:13.394 [2024-07-26 09:06:31.519114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.394 [2024-07-26 09:06:31.519141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.394 qpair failed and we were unable to recover it. 00:33:13.394 [2024-07-26 09:06:31.519291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.394 [2024-07-26 09:06:31.519318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.394 qpair failed and we were unable to recover it. 00:33:13.394 [2024-07-26 09:06:31.519463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.519490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.519639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.519665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.519844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.519870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 
00:33:13.395 [2024-07-26 09:06:31.519992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.520017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.520140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.520166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.520307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.520333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.520480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.520506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.520630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.520656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 
00:33:13.395 [2024-07-26 09:06:31.520834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.520861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.521027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.521055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.521260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.521290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.521438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.521465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.521667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.521695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 
00:33:13.395 [2024-07-26 09:06:31.521882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.521910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.522057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.522091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.522205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.522231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.522350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.522376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.522531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.522557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 
00:33:13.395 [2024-07-26 09:06:31.522706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.522731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.522873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.522899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.523029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.523056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.523210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.523236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.523354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.523380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 
00:33:13.395 [2024-07-26 09:06:31.523553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.523579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.523694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.523720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.523891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.523917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.524089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.524115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.524286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.524312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 
00:33:13.395 [2024-07-26 09:06:31.524457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.524483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.524607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.524634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.524785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.524811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.524954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.524979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.525158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.525185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 
00:33:13.395 [2024-07-26 09:06:31.525304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.525331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.525502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.525527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.525699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.525725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.525897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.395 [2024-07-26 09:06:31.525922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.395 qpair failed and we were unable to recover it. 00:33:13.395 [2024-07-26 09:06:31.526049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.526083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 
00:33:13.396 [2024-07-26 09:06:31.526228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.526254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.526398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.526423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.526542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.526569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.526745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.526772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.526887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.526913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 
00:33:13.396 [2024-07-26 09:06:31.527114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.527140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.527268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.527294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.527440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.527467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.527611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.527636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.527757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.527783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 
00:33:13.396 [2024-07-26 09:06:31.527902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.527929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.528076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.528103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.528255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.528287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.528460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.528486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.528603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.528630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 
00:33:13.396 [2024-07-26 09:06:31.528788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.528814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.528992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.529018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.529182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.529209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.529395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.529421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.529594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.529620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 
00:33:13.396 [2024-07-26 09:06:31.529767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.529793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.529941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.529968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.530115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.530142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.530297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.530323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.530441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.530467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 
00:33:13.396 [2024-07-26 09:06:31.530636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.530661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.530793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.530820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.531002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.531029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.531184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.531210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 00:33:13.396 [2024-07-26 09:06:31.531333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.396 [2024-07-26 09:06:31.531358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.396 qpair failed and we were unable to recover it. 
00:33:13.396 [2024-07-26 09:06:31.531488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.396 [2024-07-26 09:06:31.531515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.396 qpair failed and we were unable to recover it.
00:33:13.400 [2024-07-26 09:06:31.552417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.552443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.552597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.552623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.552792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.552823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.552949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.552978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.553139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.553165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 
00:33:13.400 [2024-07-26 09:06:31.553297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.553323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.553439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.553465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.553625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.553651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.553817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.553845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.553961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.553988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 
00:33:13.400 [2024-07-26 09:06:31.554140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.554166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.554296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.554323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.554483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.554512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.554651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.554679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.554831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.554865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 
00:33:13.400 [2024-07-26 09:06:31.555020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.555046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.555210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.555236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.555353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.555379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.555503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.555531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.555679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.555709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 
00:33:13.400 [2024-07-26 09:06:31.555914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.555940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.556049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.556082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.556217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.556244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.556469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.556495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.556618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.556644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 
00:33:13.400 [2024-07-26 09:06:31.556763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.556796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.556943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.556969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.557083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.557118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.557285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.557310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.557490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.557516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 
00:33:13.400 [2024-07-26 09:06:31.557657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.557688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.557859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.557887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.558111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.558139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-26 09:06:31.558257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-26 09:06:31.558283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.558440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.558465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-26 09:06:31.558593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.558620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.558745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.558773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.558940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.558998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.559157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.559189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.559336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.559365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-26 09:06:31.559486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.559514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.559673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.559702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.559829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.559857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.560051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.560089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.560228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.560255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-26 09:06:31.560404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.560431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.560554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.560581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.560724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.560750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.560903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.560929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.561076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.561102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-26 09:06:31.561314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.561340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.561493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.561522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.561658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.561687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.561875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.561902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.562024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.562050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-26 09:06:31.562212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.562238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.562365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.562391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.562544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.562587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.562786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.562818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.562958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.562986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-26 09:06:31.563131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.563158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.563278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.563304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.563452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.563479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.563594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.563620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.563758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.563784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-26 09:06:31.563930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.563960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.564133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.564160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.564304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.564337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.564452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.564479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.564626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.564652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-26 09:06:31.564803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-26 09:06:31.564830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-26 09:06:31.564955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-26 09:06:31.564982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-26 09:06:31.565123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-26 09:06:31.565151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-26 09:06:31.565300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-26 09:06:31.565327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-26 09:06:31.565477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-26 09:06:31.565503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 
00:33:13.402 [2024-07-26 09:06:31.565620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.402 [2024-07-26 09:06:31.565649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.402 qpair failed and we were unable to recover it.
00:33:13.405 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated through 09:06:31.585494, for tqpair=0x7fcfa4000b90 and tqpair=0x7fcfb4000b90, all with addr=10.0.0.2, port=4420 ...]
00:33:13.405 [2024-07-26 09:06:31.585637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.585663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.585787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.585813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.585934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.585960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.586081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.586116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.586238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.586267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 
00:33:13.405 [2024-07-26 09:06:31.586411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.586439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.586584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.586610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.586738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.586764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.586882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.586907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.587017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.587043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 
00:33:13.405 [2024-07-26 09:06:31.587184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.587211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.587335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.587362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.587524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.587550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.587667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.587693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.587813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.587840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 
00:33:13.405 [2024-07-26 09:06:31.587987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.588016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.588138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.588165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.588323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.588349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.588465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.588491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.588643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.588676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 
00:33:13.405 [2024-07-26 09:06:31.588795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.588821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.588937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.588964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.589091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.589117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.589266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-26 09:06:31.589293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-26 09:06:31.589434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.589470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-26 09:06:31.589594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.589622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.589775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.589804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.589925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.589952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.590130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.590158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.590305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.590339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-26 09:06:31.590464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.590492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.590641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.590670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.590812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.590838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.590956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.590983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.591155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.591182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-26 09:06:31.591301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.591333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.591457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.591484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.591634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.591665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.591792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.591819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.591966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.591995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-26 09:06:31.592135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.592161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.592292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.592329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.592489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.592515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.592641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.592667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.592809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.592835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-26 09:06:31.592988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.593014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.593139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.593168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.593314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.593343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.593529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.593555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.593698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.593727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-26 09:06:31.593875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.593902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.594029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.594056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.594207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.594236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.594364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.594391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.594544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.594570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-26 09:06:31.594717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.594744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.594899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.594924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.595085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.595124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.595241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.595267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.595410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.595437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-26 09:06:31.595557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.595583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-26 09:06:31.595728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-26 09:06:31.595757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.595934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.595960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.596112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.596138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.596304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.596345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 
00:33:13.407 [2024-07-26 09:06:31.596472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.596509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.596679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.596732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.596938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.596968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.597113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.597140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.597290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.597318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 
00:33:13.407 [2024-07-26 09:06:31.597477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.597507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.597635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.597664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.597813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.597842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.597982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.598012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-26 09:06:31.598181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-26 09:06:31.598208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 
00:33:13.407 [2024-07-26 09:06:31.598360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.407 [2024-07-26 09:06:31.598385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.407 qpair failed and we were unable to recover it.
00:33:13.410 [2024-07-26 09:06:31.619315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.619342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.619451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.619476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.619620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.619650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.619816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.619843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.619981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.620008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 
00:33:13.410 [2024-07-26 09:06:31.620169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.620196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.620347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.620374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.620499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.620525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.620681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.620708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.620828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.620855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 
00:33:13.410 [2024-07-26 09:06:31.620979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.621006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.621124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.621151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.621276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.621302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.621423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.621450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-26 09:06:31.621577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-26 09:06:31.621605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 
00:33:13.410 [2024-07-26 09:06:31.621735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.621762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.621923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.621952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.622080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.622110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.622276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.622302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.622432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.622458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-26 09:06:31.622602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.622628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.622775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.622802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.622921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.622948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.623097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.623124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.623272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.623300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-26 09:06:31.623420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.623450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.623596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.623623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.623743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.623769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.623881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.623910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.624067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.624095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-26 09:06:31.624247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.624273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.624434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.624463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.624621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.624650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.624812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.624838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.624955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.624987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-26 09:06:31.625163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.625192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.625317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.625343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.625465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.625509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.625695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.625724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.625882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.625909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-26 09:06:31.626053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.626084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.626256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.626283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.626406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.626434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.626589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.626615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.626758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.626784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-26 09:06:31.626905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.626931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.627081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.627109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.627262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.627288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.627441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.627467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.627628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.627655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-26 09:06:31.627820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.627846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.627994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.628021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-26 09:06:31.628153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-26 09:06:31.628182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.628313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.628340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.628487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.628513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 
00:33:13.412 [2024-07-26 09:06:31.628628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.628656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.628805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.628833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.629008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.629037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.629216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.629244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.629370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.629396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 
00:33:13.412 [2024-07-26 09:06:31.629568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.629595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.629762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.629791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.629951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.629981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.630147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.630175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.630314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.630341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 
00:33:13.412 [2024-07-26 09:06:31.630515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.630542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.630694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.630722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.630842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.630868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.631003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.631029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.631159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.631186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 
00:33:13.412 [2024-07-26 09:06:31.631336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.631363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.631506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.631535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.631704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.631731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.631858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.631885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-26 09:06:31.632050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.632090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 
00:33:13.412 [2024-07-26 09:06:31.632246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-26 09:06:31.632273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it.
[the same error pair — posix_sock_create connect() failed with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fcfa4000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously from 09:06:31.632 through 09:06:31.652]
00:33:13.415 [2024-07-26 09:06:31.652927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.415 [2024-07-26 09:06:31.652956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.415 qpair failed and we were unable to recover it. 00:33:13.415 [2024-07-26 09:06:31.653131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.415 [2024-07-26 09:06:31.653160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.415 qpair failed and we were unable to recover it. 00:33:13.415 [2024-07-26 09:06:31.653297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.415 [2024-07-26 09:06:31.653323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.415 qpair failed and we were unable to recover it. 00:33:13.415 [2024-07-26 09:06:31.653473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.415 [2024-07-26 09:06:31.653515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.415 qpair failed and we were unable to recover it. 00:33:13.415 [2024-07-26 09:06:31.653704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.415 [2024-07-26 09:06:31.653738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.415 qpair failed and we were unable to recover it. 
00:33:13.415 [2024-07-26 09:06:31.653902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.415 [2024-07-26 09:06:31.653928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.654049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.654100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.654270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.654296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.654446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.654472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.654672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.654701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 
00:33:13.416 [2024-07-26 09:06:31.654891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.654919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.655117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.655144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.655311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.655340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.655529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.655557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.655750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.655775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 
00:33:13.416 [2024-07-26 09:06:31.655966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.655995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.656183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.656212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.656382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.656408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.656589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.656617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.656808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.656836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 
00:33:13.416 [2024-07-26 09:06:31.656973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.656998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.657139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.657183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.657346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.657376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.657542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.657568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.657732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.657761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 
00:33:13.416 [2024-07-26 09:06:31.657951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.657980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.658159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.658186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.658349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.658378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.658537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.658565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.658712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.658738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 
00:33:13.416 [2024-07-26 09:06:31.658853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.658879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.659043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.659089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.659225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.659253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.659374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.659415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.659605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.659659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 
00:33:13.416 [2024-07-26 09:06:31.659797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.659822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.659943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.659969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.660184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.660212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.660360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.660386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.660553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.660583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 
00:33:13.416 [2024-07-26 09:06:31.660767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.660796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.660986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.661012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.416 [2024-07-26 09:06:31.661203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.416 [2024-07-26 09:06:31.661232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.416 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.661393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.661422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.661586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.661616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 
00:33:13.417 [2024-07-26 09:06:31.661760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.661786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.661968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.661997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.662203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.662229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.662353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.662379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.662503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.662528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 
00:33:13.417 [2024-07-26 09:06:31.662703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.662729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.662874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.662900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.663091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.663123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.663268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.663295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.663442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.663486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 
00:33:13.417 [2024-07-26 09:06:31.663664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.663714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.663856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.663882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.664027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.664052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.664255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.664281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.664431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.664456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 
00:33:13.417 [2024-07-26 09:06:31.664626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.664654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.664791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.664819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.664984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.665011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.665134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.665160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.665351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.665381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 
00:33:13.417 [2024-07-26 09:06:31.665548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.665574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.665718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.665759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.665882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.665911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.666072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.666125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.666246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.666274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 
00:33:13.417 [2024-07-26 09:06:31.666455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.666481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.666637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.666663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.666772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.666798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.666943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.666969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 00:33:13.417 [2024-07-26 09:06:31.667117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.417 [2024-07-26 09:06:31.667143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.417 qpair failed and we were unable to recover it. 
00:33:13.417 [2024-07-26 09:06:31.667291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.417 [2024-07-26 09:06:31.667317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.418 qpair failed and we were unable to recover it.
00:33:13.418 [2024-07-26 09:06:31.667523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.418 [2024-07-26 09:06:31.667576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.418 qpair failed and we were unable to recover it.
[... the same three-record pattern (posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats continuously from 09:06:31.667 through 09:06:31.688 ...]
00:33:13.421 [2024-07-26 09:06:31.689017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.689046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.689215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.689241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.689389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.689430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.689589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.689617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.689803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.689829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 
00:33:13.421 [2024-07-26 09:06:31.689991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.690018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.690193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.690219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.690362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.690387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.690556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.690584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.690720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.690747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 
00:33:13.421 [2024-07-26 09:06:31.690903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.690932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.691137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.691164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.691291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.691316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.691491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.691516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.691664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.691705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 
00:33:13.421 [2024-07-26 09:06:31.691876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.691904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.692097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.692122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.692259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.692284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.692453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.692495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.692664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.692690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 
00:33:13.421 [2024-07-26 09:06:31.692849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.692877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.693035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.693080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.693214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.693239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.693381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.693423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.693608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.693635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 
00:33:13.421 [2024-07-26 09:06:31.693778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.693802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.693923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.693950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.694099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.694125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.694304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.694330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.694496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.694525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 
00:33:13.421 [2024-07-26 09:06:31.694655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.421 [2024-07-26 09:06:31.694682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.421 qpair failed and we were unable to recover it. 00:33:13.421 [2024-07-26 09:06:31.694872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.694897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.695018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.695065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.695235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.695263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.695410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.695435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 
00:33:13.422 [2024-07-26 09:06:31.695582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.695607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.695821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.695846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.695986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.696010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.696188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.696213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.696379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.696406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 
00:33:13.422 [2024-07-26 09:06:31.696552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.696578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.696725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.696753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.696923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.696951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.697090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.697116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.697268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.697293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 
00:33:13.422 [2024-07-26 09:06:31.697488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.697516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.697658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.697684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.697825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.697849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.698019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.698046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.698205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.698230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 
00:33:13.422 [2024-07-26 09:06:31.698374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.698400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.698557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.698583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.698699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.698726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.698872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.698896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.699052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.699100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 
00:33:13.422 [2024-07-26 09:06:31.699247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.699273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.699423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.699447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.699627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.699654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.699803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.699828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.699971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.699996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 
00:33:13.422 [2024-07-26 09:06:31.700165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.700190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.700364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.700389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.700515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.700540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.700688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.700712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.700894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.700918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 
00:33:13.422 [2024-07-26 09:06:31.701083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.701110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.701294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.422 [2024-07-26 09:06:31.701320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.422 qpair failed and we were unable to recover it. 00:33:13.422 [2024-07-26 09:06:31.701474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.423 [2024-07-26 09:06:31.701499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.423 qpair failed and we were unable to recover it. 00:33:13.423 [2024-07-26 09:06:31.701668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.423 [2024-07-26 09:06:31.701697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.423 qpair failed and we were unable to recover it. 00:33:13.423 [2024-07-26 09:06:31.701851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.423 [2024-07-26 09:06:31.701880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.423 qpair failed and we were unable to recover it. 
00:33:13.423 [2024-07-26 09:06:31.702067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.423 [2024-07-26 09:06:31.702096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.423 qpair failed and we were unable to recover it. 00:33:13.423 [2024-07-26 09:06:31.702266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.423 [2024-07-26 09:06:31.702291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.423 qpair failed and we were unable to recover it. 00:33:13.423 [2024-07-26 09:06:31.702458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.423 [2024-07-26 09:06:31.702487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.423 qpair failed and we were unable to recover it. 00:33:13.423 [2024-07-26 09:06:31.702620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.423 [2024-07-26 09:06:31.702646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.423 qpair failed and we were unable to recover it. 00:33:13.423 [2024-07-26 09:06:31.702822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.423 [2024-07-26 09:06:31.702864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.423 qpair failed and we were unable to recover it. 
00:33:13.423 [2024-07-26 09:06:31.703026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.703054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.703232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.703257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.703444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.703473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.703640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.703668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.703846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.703871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.704011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.704036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.704217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.704250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.704390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.704415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.704601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.704628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.704797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.704822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.704970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.704995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.705189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.705218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.705373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.705401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.705563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.705587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.705713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.705738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.705888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.705913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.706025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.706050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.706247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.706275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.706468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.706496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.706664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.706690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.706887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.706915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.707086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.707112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.707261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.707287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.707410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.707434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.707575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.707604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.707765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.707790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.707939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.707964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.708112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.708138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.708261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.708286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.423 qpair failed and we were unable to recover it.
00:33:13.423 [2024-07-26 09:06:31.708408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.423 [2024-07-26 09:06:31.708433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.708582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.708607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.708753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.708778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.708947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.708974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.709153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.709179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.709291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.709316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.709463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.709489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.709687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.709715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.709867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.709891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.710039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.710069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.710220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.710246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.710393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.710419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.710569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.710595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.710745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.710771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.710892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.710917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.711027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.711052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.711202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.711227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.711373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.711403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.711573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.711601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.711758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.711785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.711950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.711975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.712138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.712167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.712333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.712361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.712523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.712547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.712691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.712735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.712919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.712947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.713087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.713113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.713235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.713260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.713402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.713428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.713547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.713572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.713717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.713742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.713867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.713892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.714068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.714095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.714224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.714253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.714412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.714441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.714571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.714596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.424 [2024-07-26 09:06:31.714739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.424 [2024-07-26 09:06:31.714765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.424 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.714969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.714997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.715139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.715164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.715354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.715381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.715544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.715573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.715751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.715776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.715963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.715991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.716147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.716175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.716316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.716341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.716486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.716528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.716716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.716744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.716889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.716915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.717040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.717076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.717224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.717250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.717372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.717397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.717544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.717570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.717714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.717756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.717920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.717945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.718134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.718163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.718322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.718350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.718487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.718512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.718654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.718684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.718894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.718919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.719105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.719130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.719299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.719324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.719479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.719507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.719660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.719686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.719828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.719853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.719983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.720008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.720194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.720220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.720376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.720404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.720579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.720604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.720727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.720754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.720945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.720974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.721132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.721161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.721360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.721385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.721535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.721579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.721765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.425 [2024-07-26 09:06:31.721794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.425 qpair failed and we were unable to recover it.
00:33:13.425 [2024-07-26 09:06:31.721938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.721964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.722110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.722152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.722349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.722374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.722548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.722572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.722759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.722786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.722948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.722975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.723153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.723179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.723297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.723322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.723497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.723526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.723666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.723693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.723846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.723871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.724017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.426 [2024-07-26 09:06:31.724044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.426 qpair failed and we were unable to recover it.
00:33:13.426 [2024-07-26 09:06:31.724191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.724218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.724392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.724417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.724580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.724606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.724727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.724753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.724893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.724922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 
00:33:13.426 [2024-07-26 09:06:31.725085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.725127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.725252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.725278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.725424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.725450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.725624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.725653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.725793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.725818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 
00:33:13.426 [2024-07-26 09:06:31.725997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.726040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.726175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.726207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.726377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.726403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.726593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.726621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.726754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.726783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 
00:33:13.426 [2024-07-26 09:06:31.726922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.726948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.727095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.727137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.727297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.727325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.727502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.727526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 00:33:13.426 [2024-07-26 09:06:31.727644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.426 [2024-07-26 09:06:31.727685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.426 qpair failed and we were unable to recover it. 
00:33:13.426 [2024-07-26 09:06:31.727814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.727842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.727977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.728002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.728149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.728192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.728369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.728394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.728537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.728561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 
00:33:13.427 [2024-07-26 09:06:31.728726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.728754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.728916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.728945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.729101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.729126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.729319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.729347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.729523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.729548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 
00:33:13.427 [2024-07-26 09:06:31.729696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.729721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.729906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.729934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.730077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.730108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.730300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.730325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.730519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.730548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 
00:33:13.427 [2024-07-26 09:06:31.730679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.730706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.730879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.730905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.731067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.731095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.731234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.731263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.731406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.731431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 
00:33:13.427 [2024-07-26 09:06:31.731545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.731570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.731736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.731761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.731898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.731927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.732086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.732127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.732246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.732272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 
00:33:13.427 [2024-07-26 09:06:31.732416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.732441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.732632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.732660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.732815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.732843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.733013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.733038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.733191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.733216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 
00:33:13.427 [2024-07-26 09:06:31.733415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.733443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.733612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.733641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.733800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.733827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.733998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.734024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.734175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.734201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 
00:33:13.427 [2024-07-26 09:06:31.734369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.734397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.734594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.734620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.427 qpair failed and we were unable to recover it. 00:33:13.427 [2024-07-26 09:06:31.734793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.427 [2024-07-26 09:06:31.734818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.734986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.735014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.735186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.735212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 
00:33:13.428 [2024-07-26 09:06:31.735370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.735395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.735511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.735555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.735720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.735748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.735911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.735936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.736082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.736108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 
00:33:13.428 [2024-07-26 09:06:31.736265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.736291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.736464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.736490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.736678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.736705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.736839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.736868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.737083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.737126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 
00:33:13.428 [2024-07-26 09:06:31.737250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.737277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.737427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.737456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.737597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.737623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.737768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.737795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 00:33:13.428 [2024-07-26 09:06:31.737996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.738024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 
00:33:13.428 [2024-07-26 09:06:31.738163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.428 [2024-07-26 09:06:31.738188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.428 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7fcfb4000b90, addr=10.0.0.2, port=4420 repeats continuously through 2024-07-26 09:06:31.759073]
00:33:13.431 [2024-07-26 09:06:31.759214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.759239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 00:33:13.431 [2024-07-26 09:06:31.759382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.759410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 00:33:13.431 [2024-07-26 09:06:31.759552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.759577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 00:33:13.431 [2024-07-26 09:06:31.759719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.759745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 00:33:13.431 [2024-07-26 09:06:31.759893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.759918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 
00:33:13.431 [2024-07-26 09:06:31.760070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.760097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 00:33:13.431 [2024-07-26 09:06:31.760243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.760268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 00:33:13.431 [2024-07-26 09:06:31.760389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.760414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 00:33:13.431 [2024-07-26 09:06:31.760559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.760585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 00:33:13.431 [2024-07-26 09:06:31.760755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.760781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 
00:33:13.431 [2024-07-26 09:06:31.760954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.760983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 00:33:13.431 [2024-07-26 09:06:31.761179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.431 [2024-07-26 09:06:31.761205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.431 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.761342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.761370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.761555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.761583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.761719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.761746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 
00:33:13.432 [2024-07-26 09:06:31.761896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.761921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.762091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.762117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.762237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.762262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.762377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.762402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.762571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.762600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 
00:33:13.432 [2024-07-26 09:06:31.762744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.762769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.762916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.762946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.763130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.763174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.763343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.763371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.763560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.763589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 
00:33:13.432 [2024-07-26 09:06:31.763777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.763806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.763972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.763998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.764126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.764154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.764312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.764355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.764516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.764541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 
00:33:13.432 [2024-07-26 09:06:31.764708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.764738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.764896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.764924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.765091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.765117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.765264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.765289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.765461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.765503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 
00:33:13.432 [2024-07-26 09:06:31.765695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.765720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.765913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.765941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.766104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.766135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.766307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.766333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.766453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.766494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 
00:33:13.432 [2024-07-26 09:06:31.766647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.766676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.766815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.766840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.767034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.767068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.767230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.767256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.767377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.767403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 
00:33:13.432 [2024-07-26 09:06:31.767547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.767574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.767720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.767761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.767948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.767973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.432 [2024-07-26 09:06:31.768115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.432 [2024-07-26 09:06:31.768144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.432 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.768342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.768385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 
00:33:13.433 [2024-07-26 09:06:31.768575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.768602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.768796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.768825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.768995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.769021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.769165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.769192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.769396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.769425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 
00:33:13.433 [2024-07-26 09:06:31.769617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.769644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.769766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.769793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.769938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.769964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.770167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.770196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.770345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.770371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 
00:33:13.433 [2024-07-26 09:06:31.770491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.770518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.770713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.770747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.770920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.770946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.771069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.771095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.771240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.771266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 
00:33:13.433 [2024-07-26 09:06:31.771473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.771498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.771691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.771719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.771901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.771929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.772086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.772113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.772254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.772297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 
00:33:13.433 [2024-07-26 09:06:31.772458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.772489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.772655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.772680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.772809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.772835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.772985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.773010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.773156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.773182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 
00:33:13.433 [2024-07-26 09:06:31.773320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.773348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.773505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.773533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.773690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.773716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.773905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.773933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 00:33:13.433 [2024-07-26 09:06:31.774102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.433 [2024-07-26 09:06:31.774146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.433 qpair failed and we were unable to recover it. 
00:33:13.436 [2024-07-26 09:06:31.794404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.794430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.794577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.794602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.794713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.794738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.794881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.794917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.795112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.795138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 
00:33:13.436 [2024-07-26 09:06:31.795268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.795294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.795458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.795497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.795648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.795675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.795803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.795830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.796021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.796053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 
00:33:13.436 [2024-07-26 09:06:31.796223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.796249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.796377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.796405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.796525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.436 [2024-07-26 09:06:31.796550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.436 qpair failed and we were unable to recover it. 00:33:13.436 [2024-07-26 09:06:31.796698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.796725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.796848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.796874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 
00:33:13.437 [2024-07-26 09:06:31.797054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.797086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.797238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.797264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.797427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.797456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.797617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.797649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.797770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.797797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 
00:33:13.437 [2024-07-26 09:06:31.797963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.797993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.798170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.798196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.798321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.798348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.798495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.798523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.798701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.798730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 
00:33:13.437 [2024-07-26 09:06:31.798852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.798881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.799007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.799033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.799177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.799204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.799324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.799351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.799475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.799501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 
00:33:13.437 [2024-07-26 09:06:31.799621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.799649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.799795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.799825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.799992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.800024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.800185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.800211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.800355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.800382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 
00:33:13.437 [2024-07-26 09:06:31.800498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.800541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.800734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.800763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.800908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.800934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.801054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.801085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.801259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.801289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 
00:33:13.437 [2024-07-26 09:06:31.801456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.801481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.801673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.801702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.801865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.801893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.802030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.802056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.802255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.802285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 
00:33:13.437 [2024-07-26 09:06:31.802477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.802507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.802644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.802670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.802809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.802835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.803021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.803047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.803224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.803251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 
00:33:13.437 [2024-07-26 09:06:31.803374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.803400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.803551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.803578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.803722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.803748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.437 qpair failed and we were unable to recover it. 00:33:13.437 [2024-07-26 09:06:31.803865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.437 [2024-07-26 09:06:31.803908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.804084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.804116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 
00:33:13.438 [2024-07-26 09:06:31.804289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.804315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.804461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.804487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.804628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.804653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.804826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.804856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.805043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.805079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 
00:33:13.438 [2024-07-26 09:06:31.805280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.805305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.805449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.805474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.805638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.805666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.805891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.805947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.806112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.806139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 
00:33:13.438 [2024-07-26 09:06:31.806301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.806329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.806501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.806527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.806654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.806679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.806798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.806824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.807004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.807033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 
00:33:13.438 [2024-07-26 09:06:31.807236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.807261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.807427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.807454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.807643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.807692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.807864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.807889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.808051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.808093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 
00:33:13.438 [2024-07-26 09:06:31.808244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.808269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.808420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.808444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.808583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.808609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.808729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.808754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.808877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.808903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 
00:33:13.438 [2024-07-26 09:06:31.809025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.809050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.809247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.809272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.809420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.809446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.809562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.809587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.809756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.809797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 
00:33:13.438 [2024-07-26 09:06:31.809974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.809999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.810151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.810179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.810372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.810397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.810540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.810566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.810739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.810767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 
00:33:13.438 [2024-07-26 09:06:31.810925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.810952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.811080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.811105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.811251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.811277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.811486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.811550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.811717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.811742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 
00:33:13.438 [2024-07-26 09:06:31.811914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.438 [2024-07-26 09:06:31.811939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.438 qpair failed and we were unable to recover it. 00:33:13.438 [2024-07-26 09:06:31.812082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.812109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.812257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.812283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.812446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.812478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.812649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.812674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 
00:33:13.439 [2024-07-26 09:06:31.812819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.812845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.813033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.813067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.813257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.813284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.813451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.813476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.813589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.813615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 
00:33:13.439 [2024-07-26 09:06:31.813755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.813782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.813922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.813948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.814092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.814117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.814303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.814328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.814496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.814521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 
00:33:13.439 [2024-07-26 09:06:31.814638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.814664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.814835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.814863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.815011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.815036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.815190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.815217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.815414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.815462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 
00:33:13.439 [2024-07-26 09:06:31.815606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.815631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.815816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.815844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.816043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.816094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.816277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.816305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.816426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.816471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 
00:33:13.439 [2024-07-26 09:06:31.816738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.816789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.816952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.816978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.817145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.817176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.817337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.817366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.817508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.817535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 
00:33:13.439 [2024-07-26 09:06:31.817668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.817695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.817891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.817917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.818040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.818076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.818251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.818277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.818445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.818499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 
00:33:13.439 [2024-07-26 09:06:31.818636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.818663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.818813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.818839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.818986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.819012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.819159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.819185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.439 [2024-07-26 09:06:31.819381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.819410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 
00:33:13.439 [2024-07-26 09:06:31.819589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.439 [2024-07-26 09:06:31.819617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.439 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.819740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.819766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.819966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.819993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.820128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.820162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.820339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.820364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 
00:33:13.440 [2024-07-26 09:06:31.820560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.820589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.820762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.820816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.820991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.821015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.821167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.821192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.821436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.821489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 
00:33:13.440 [2024-07-26 09:06:31.821679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.821704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.821849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.821892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.822034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.822071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.822242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.822266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.822436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.822464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 
00:33:13.440 [2024-07-26 09:06:31.822664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.822720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.822888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.822913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.823085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.823114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.823274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.823301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.823442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.823467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 
00:33:13.440 [2024-07-26 09:06:31.823631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.823655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.823776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.823801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.823921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.823947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.824108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.824136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.824322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.824350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 
00:33:13.440 [2024-07-26 09:06:31.824522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.824547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.824706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.824734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.824897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.824924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.825092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.825117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.825287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.825316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 
00:33:13.440 [2024-07-26 09:06:31.825562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.825609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.825811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.825836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.825999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.826027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.826194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.826222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.826386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.826410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 
00:33:13.440 [2024-07-26 09:06:31.826558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.826584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.826732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.826778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.826927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.440 [2024-07-26 09:06:31.826951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.440 qpair failed and we were unable to recover it. 00:33:13.440 [2024-07-26 09:06:31.827098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.441 [2024-07-26 09:06:31.827141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.441 qpair failed and we were unable to recover it. 00:33:13.441 [2024-07-26 09:06:31.827311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.441 [2024-07-26 09:06:31.827338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.441 qpair failed and we were unable to recover it. 
00:33:13.441 [2024-07-26 09:06:31.827512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.441 [2024-07-26 09:06:31.827538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.441 qpair failed and we were unable to recover it. 00:33:13.441 [2024-07-26 09:06:31.827668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.441 [2024-07-26 09:06:31.827695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.441 qpair failed and we were unable to recover it. 00:33:13.441 [2024-07-26 09:06:31.827884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.441 [2024-07-26 09:06:31.827909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.441 qpair failed and we were unable to recover it. 00:33:13.727 [2024-07-26 09:06:31.828082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.727 [2024-07-26 09:06:31.828114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.727 qpair failed and we were unable to recover it. 00:33:13.727 [2024-07-26 09:06:31.828287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.728 [2024-07-26 09:06:31.828316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.728 qpair failed and we were unable to recover it. 
00:33:13.728 [2024-07-26 09:06:31.828572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.828626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.828795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.828820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.828935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.828975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.829126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.829170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.829340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.829367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.829493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.829520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.829661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.829688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.829836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.829863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.829983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.830010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.830169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.830197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.830342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.830368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.830514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.830540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.830664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.830689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.830835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.830861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.831025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.831055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.831238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.831263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.831385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.831412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.831581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.831622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.831782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.831810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.831949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.831976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.832101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.832128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.832303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.832333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.832482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.832509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.832658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.832700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.832834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.832864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.833039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.833071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.833234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.833262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.833433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.833459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.833603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.833629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.833776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.833817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.833978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.834006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.834201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.834227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.834394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.834423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.834585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.834637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.834780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.834807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.834977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.835019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.835183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.835212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.835354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.835380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.835496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.835528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.835666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.835697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.835868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.835895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.836100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.836129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.836311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.836337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.836476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.836501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.836662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.836690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.836849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.836877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.837038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.837069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.837263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.837291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.837545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.837608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.837751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.837776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.837892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.837917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.838080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.838109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.838285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.838311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.838439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.838464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.838585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.838611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.838736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.838762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.838937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.838979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.839144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.839173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.839339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.839365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.839478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.839503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.728 [2024-07-26 09:06:31.839647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.728 [2024-07-26 09:06:31.839673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.728 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.839826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.839851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.840027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.840053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.840236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.840265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.840409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.840435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.840587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.840632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.840798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.840825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.840997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.841022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.841165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.841191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.841361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.841389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.841550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.841576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.841719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.841762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.841897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.841925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.842067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.842093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.842237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.842263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.842413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.842455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.842653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.842679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.842839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.842867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.843025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.843057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.843222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.843248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.843392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.843435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.843635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.843687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.843850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.843876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.844048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.844084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.844244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.844272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.844463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.844488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.844648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.844677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.844933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.844986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.845183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.845209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.845374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.845402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.845578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.845604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.845732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.845758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.845911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.845937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.846091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.846137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.846308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.729 [2024-07-26 09:06:31.846334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.729 qpair failed and we were unable to recover it.
00:33:13.729 [2024-07-26 09:06:31.846502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.846531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.846785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.846835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.847039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.847071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.847239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.847267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.847421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.847449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 
00:33:13.729 [2024-07-26 09:06:31.847628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.847653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.847778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.847820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.847960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.847988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.848159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.848187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.848338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.848364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 
00:33:13.729 [2024-07-26 09:06:31.848512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.848538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.848708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.848734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.848850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.848877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.849032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.849080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.849228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.849253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 
00:33:13.729 [2024-07-26 09:06:31.849398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.849440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.849627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.849685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.849829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.849855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.850025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.850051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.850225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.850252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 
00:33:13.729 [2024-07-26 09:06:31.850401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.850427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.850562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.850590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.850763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.850791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.850985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.851015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.851152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.851186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 
00:33:13.729 [2024-07-26 09:06:31.851338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.851363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.851542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.851569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.729 qpair failed and we were unable to recover it. 00:33:13.729 [2024-07-26 09:06:31.851687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.729 [2024-07-26 09:06:31.851713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.851863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.851891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.852066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.852091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 
00:33:13.730 [2024-07-26 09:06:31.852242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.852268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.852408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.852434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.852548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.852573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.852696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.852722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.852835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.852862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 
00:33:13.730 [2024-07-26 09:06:31.852979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.853004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.853182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.853208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.853350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.853393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.853535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.853561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.853681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.853707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 
00:33:13.730 [2024-07-26 09:06:31.853861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.853886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.854093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.854119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.854263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.854304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.854466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.854495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.854691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.854716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 
00:33:13.730 [2024-07-26 09:06:31.854906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.854934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.855096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.855125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.855287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.855312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.855431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.855456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.855630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.855658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 
00:33:13.730 [2024-07-26 09:06:31.855828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.855854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.855975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.856017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.856181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.856210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.856345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.856371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.856488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.856514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 
00:33:13.730 [2024-07-26 09:06:31.856668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.856696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.856857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.856882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.857004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.857030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.857154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.857179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.857301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.857327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 
00:33:13.730 [2024-07-26 09:06:31.857471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.857498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.857644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.857687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.857827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.857853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.858001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.858049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.858214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.858243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 
00:33:13.730 [2024-07-26 09:06:31.858383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.858410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.858602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.858630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.858791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.858819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.859014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.859039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.859170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.859195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 
00:33:13.730 [2024-07-26 09:06:31.859319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.859345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.859492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.859517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.859660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.859685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.859826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.859854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.860021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.860048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 
00:33:13.730 [2024-07-26 09:06:31.860180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.860206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.860372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.730 [2024-07-26 09:06:31.860399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.730 qpair failed and we were unable to recover it. 00:33:13.730 [2024-07-26 09:06:31.860545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.731 [2024-07-26 09:06:31.860571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.731 qpair failed and we were unable to recover it. 00:33:13.731 [2024-07-26 09:06:31.860716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.731 [2024-07-26 09:06:31.860742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.731 qpair failed and we were unable to recover it. 00:33:13.731 [2024-07-26 09:06:31.860920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.731 [2024-07-26 09:06:31.860945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.731 qpair failed and we were unable to recover it. 
00:33:13.731 [2024-07-26 09:06:31.861108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.731 [2024-07-26 09:06:31.861134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.731 qpair failed and we were unable to recover it. 00:33:13.731 [2024-07-26 09:06:31.861280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.731 [2024-07-26 09:06:31.861305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.731 qpair failed and we were unable to recover it. 00:33:13.731 [2024-07-26 09:06:31.861450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.731 [2024-07-26 09:06:31.861491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.731 qpair failed and we were unable to recover it. 00:33:13.731 [2024-07-26 09:06:31.861633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.731 [2024-07-26 09:06:31.861658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.731 qpair failed and we were unable to recover it. 00:33:13.731 [2024-07-26 09:06:31.861846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.731 [2024-07-26 09:06:31.861875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.731 qpair failed and we were unable to recover it. 
00:33:13.731 [2024-07-26 09:06:31.862006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.731 [2024-07-26 09:06:31.862035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.731 qpair failed and we were unable to recover it.
00:33:13.731 [... the same connect() failure (errno = 111) and qpair recovery error for tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 repeated continuously through 2024-07-26 09:06:31.883 ...]
00:33:13.733 [2024-07-26 09:06:31.883402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.883427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.883572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.883614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.883805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.883830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.883945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.883970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.884091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.884117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 
00:33:13.733 [2024-07-26 09:06:31.884290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.884333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.884476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.884502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.884646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.884688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.884876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.884904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.885069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.885095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 
00:33:13.733 [2024-07-26 09:06:31.885219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.885249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.885417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.885443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.885560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.885587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.885735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.885761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.885938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.885966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 
00:33:13.733 [2024-07-26 09:06:31.886130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.886156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.886321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.886349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.886516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.886544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.886708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.886734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.886846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.886872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 
00:33:13.733 [2024-07-26 09:06:31.887038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.887072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.887226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.887252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.887395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.887420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.887537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.887562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.887710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.887736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 
00:33:13.733 [2024-07-26 09:06:31.887903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.887928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.888083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.888109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.888259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.888286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.888480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.888509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.888665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.888693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 
00:33:13.733 [2024-07-26 09:06:31.888861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.888887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.889005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.889047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.889234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.889263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.889427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.889453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.889592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.889635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 
00:33:13.733 [2024-07-26 09:06:31.889757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.889785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.889954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.889979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.890130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.890157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.890277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.890302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.890450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.890476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 
00:33:13.733 [2024-07-26 09:06:31.890596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.890640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.890801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.890830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.891002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.891027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.891161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.891188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.891335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.891361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 
00:33:13.733 [2024-07-26 09:06:31.891541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.891568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.891762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.891790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.891949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.891978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.892145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.892172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.892330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.892358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 
00:33:13.733 [2024-07-26 09:06:31.892522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.892555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.892745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.733 [2024-07-26 09:06:31.892770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.733 qpair failed and we were unable to recover it. 00:33:13.733 [2024-07-26 09:06:31.892908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.892936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.893101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.893129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.893319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.893344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 
00:33:13.734 [2024-07-26 09:06:31.893513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.893541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.893673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.893701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.893861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.893887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.894029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.894077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.894216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.894244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 
00:33:13.734 [2024-07-26 09:06:31.894410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.894435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.894579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.894621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.894778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.894806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.894951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.894976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.895128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.895154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 
00:33:13.734 [2024-07-26 09:06:31.895322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.895350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.895512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.895537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.895653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.895694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.895835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.895863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.896034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.896064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 
00:33:13.734 [2024-07-26 09:06:31.896258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.896286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.896449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.896478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.896643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.896669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.896788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.896814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 00:33:13.734 [2024-07-26 09:06:31.896980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.734 [2024-07-26 09:06:31.897021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.734 qpair failed and we were unable to recover it. 
00:33:13.734 [2024-07-26 09:06:31.897198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.734 [2024-07-26 09:06:31.897224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.734 qpair failed and we were unable to recover it.
00:33:13.736 [preceding connect()/qpair-failure sequence repeated for every reconnect attempt from 09:06:31.897367 through 09:06:31.918115; each attempt to addr=10.0.0.2, port=4420 on tqpair=0x7fcfb4000b90 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:33:13.736 [2024-07-26 09:06:31.918240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.918266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.918388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.918414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.918567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.918592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.918782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.918810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.918977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.919007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 
00:33:13.736 [2024-07-26 09:06:31.919187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.919213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.919359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.919385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.919593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.919622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.919779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.919804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.919947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.919991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 
00:33:13.736 [2024-07-26 09:06:31.920134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.920161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.920306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.920332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.920518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.920547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.920693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.920721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.920882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.920907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 
00:33:13.736 [2024-07-26 09:06:31.921030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.921056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.921205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.921231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.921402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.921428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.921543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.921568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.921714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.921741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 
00:33:13.736 [2024-07-26 09:06:31.921869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.921900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.922094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.922129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.922292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.922320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.922461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.922487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.922639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.922664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 
00:33:13.736 [2024-07-26 09:06:31.922848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.922874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.923043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.923076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.923263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.923290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.923431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.923456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.923614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.923639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 
00:33:13.736 [2024-07-26 09:06:31.923796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.923821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.923961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.923989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.924190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.924216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.924360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.924386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.924512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.924539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 
00:33:13.736 [2024-07-26 09:06:31.924659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.924684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.924831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.924875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.925008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.925036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.925239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.925265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.736 [2024-07-26 09:06:31.925428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.925457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 
00:33:13.736 [2024-07-26 09:06:31.925645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.736 [2024-07-26 09:06:31.925673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.736 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.925868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.925893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.926088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.926125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.926281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.926309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.926480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.926506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 
00:33:13.737 [2024-07-26 09:06:31.926628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.926654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.926800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.926825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.927001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.927027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.927207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.927232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.927371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.927399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 
00:33:13.737 [2024-07-26 09:06:31.927541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.927568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.927720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.927762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.927946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.927974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.928123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.928149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.928322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.928347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 
00:33:13.737 [2024-07-26 09:06:31.928510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.928540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.928702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.928727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.928869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.928912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.929046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.929082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.929247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.929272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 
00:33:13.737 [2024-07-26 09:06:31.929436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.929468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.929650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.929678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.929813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.929838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.929980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.930005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.930164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.930190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 
00:33:13.737 [2024-07-26 09:06:31.930340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.930366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.930515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.930541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.930666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.930692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.930874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.930901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.931050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.931101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 
00:33:13.737 [2024-07-26 09:06:31.931263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.931289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.931462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.931487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.931649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.931677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.931821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.931847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.931997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.932022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 
00:33:13.737 [2024-07-26 09:06:31.932177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.932203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.932393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.932421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.932591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.932616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.932780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.932808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 00:33:13.737 [2024-07-26 09:06:31.932965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.737 [2024-07-26 09:06:31.932993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.737 qpair failed and we were unable to recover it. 
00:33:13.739 [2024-07-26 09:06:31.953480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.953508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.953647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.953672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.953819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.953862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.954035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.954067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.954222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.954249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 
00:33:13.739 [2024-07-26 09:06:31.954398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.954425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.954610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.954638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.954804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.954829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.954950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.954976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.955093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.955119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 
00:33:13.739 [2024-07-26 09:06:31.955250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.955276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.955468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.955496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.955660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.955688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.955833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.955858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.956031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.956057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 
00:33:13.739 [2024-07-26 09:06:31.956244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.956269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.956442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.956468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.956611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.956639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.956827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.956855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.956989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.957015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 
00:33:13.739 [2024-07-26 09:06:31.957192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.957219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.957326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.957367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.957540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.957565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.957677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.957719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.957889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.957915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 
00:33:13.739 [2024-07-26 09:06:31.958076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.958102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.958252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.739 [2024-07-26 09:06:31.958277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.739 qpair failed and we were unable to recover it. 00:33:13.739 [2024-07-26 09:06:31.958452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.958477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.958626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.958652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.958844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.958872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 
00:33:13.740 [2024-07-26 09:06:31.959007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.959039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.959184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.959210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.959355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.959380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.959551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.959579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.959745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.959771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 
00:33:13.740 [2024-07-26 09:06:31.959917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.959943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.960140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.960169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.960362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.960387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.960545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.960570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.960718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.960760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 
00:33:13.740 [2024-07-26 09:06:31.960926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.960951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.961078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.961123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.961281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.961309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.961449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.961475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.961628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.961654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 
00:33:13.740 [2024-07-26 09:06:31.961800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.961825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.961944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.961969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.962137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.962167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.962358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.962387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.962575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.962601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 
00:33:13.740 [2024-07-26 09:06:31.962795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.962823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.962955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.962983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.963124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.963150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.963321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.963346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.963522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.963550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 
00:33:13.740 [2024-07-26 09:06:31.963688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.963713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.963868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.963911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.964083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.964109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.964226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.964252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.964398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.964441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 
00:33:13.740 [2024-07-26 09:06:31.964561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.964589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.964757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.964783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.964924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.964949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.965162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.965188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.965335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.965360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 
00:33:13.740 [2024-07-26 09:06:31.965478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.965521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.965710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.965739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.965911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.965936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.966052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.966094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.966298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.966327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 
00:33:13.740 [2024-07-26 09:06:31.966493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.966523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.966709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.966737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.966901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.966926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.967098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.967125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 00:33:13.740 [2024-07-26 09:06:31.967292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.967320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it. 
00:33:13.740 [2024-07-26 09:06:31.967475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.740 [2024-07-26 09:06:31.967503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.740 qpair failed and we were unable to recover it.
00:33:13.740 [log trimmed: the identical connect() failure (errno = 111, ECONNREFUSED) against addr=10.0.0.2, port=4420 for tqpair=0x7fcfb4000b90 repeats continuously through timestamp 09:06:31.988034, each attempt ending with "qpair failed and we were unable to recover it."]
00:33:13.742 [2024-07-26 09:06:31.988169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.988195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.988348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.988373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.988542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.988568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.988690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.988716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.988863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.988889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 
00:33:13.742 [2024-07-26 09:06:31.989047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.989098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.989232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.989258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.989453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.989482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.989637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.989665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.989852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.989878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 
00:33:13.742 [2024-07-26 09:06:31.990025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.990051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.990190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.990216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.990338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.742 [2024-07-26 09:06:31.990364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.742 qpair failed and we were unable to recover it. 00:33:13.742 [2024-07-26 09:06:31.990559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.990588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.990782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.990811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:31.990970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.990995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.991164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.991193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.991350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.991379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.991550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.991576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.991721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.991764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:31.991913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.991939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.992084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.992110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.992256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.992282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.992470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.992498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.992671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.992698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:31.992839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.992865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.993036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.993070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.993207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.993237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.993428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.993456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.993621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.993649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:31.993811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.993836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.993980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.994022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.994206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.994235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.994409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.994435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.994547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.994574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:31.994692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.994718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.994869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.994895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.995011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.995038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.995193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.995219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.995343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.995369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:31.995516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.995542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.995709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.995738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.995879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.995904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.996025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.996052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.996197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.996225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:31.996369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.996395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.996552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.996595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.996757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.996786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.996932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.996957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.997151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.997180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:31.997341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.997371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.997581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.997606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.997795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.997823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.997987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.998016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.998161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.998188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:31.998375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.998404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.998592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.998618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.998740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.998766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.998915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.998960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.999117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.999145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:31.999316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.999341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.999462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.999488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.999605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.999631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.999752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.999778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:31.999951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:31.999992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:32.000176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.000205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:32.000348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.000374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:32.000522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.000569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:32.000697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.000725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:32.000889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.000915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:32.001107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.001137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:32.001324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.001353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:32.001495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.001521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:32.001633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.001658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:32.001828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.001857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 
00:33:13.743 [2024-07-26 09:06:32.002017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.743 [2024-07-26 09:06:32.002042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.743 qpair failed and we were unable to recover it. 00:33:13.743 [2024-07-26 09:06:32.002229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.744 [2024-07-26 09:06:32.002258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.744 qpair failed and we were unable to recover it. 00:33:13.744 [2024-07-26 09:06:32.002426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.744 [2024-07-26 09:06:32.002454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.744 qpair failed and we were unable to recover it. 00:33:13.744 [2024-07-26 09:06:32.002631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.744 [2024-07-26 09:06:32.002657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.744 qpair failed and we were unable to recover it. 00:33:13.744 [2024-07-26 09:06:32.002800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.744 [2024-07-26 09:06:32.002827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.744 qpair failed and we were unable to recover it. 
00:33:13.744 [2024-07-26 09:06:32.007614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.744 [2024-07-26 09:06:32.007639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.744 qpair failed and we were unable to recover it.
00:33:13.744 [2024-07-26 09:06:32.007834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.744 [2024-07-26 09:06:32.007873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.744 qpair failed and we were unable to recover it.
00:33:13.744 [2024-07-26 09:06:32.008023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.744 [2024-07-26 09:06:32.008065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.744 qpair failed and we were unable to recover it.
00:33:13.744 [2024-07-26 09:06:32.008208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.744 [2024-07-26 09:06:32.008234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.744 qpair failed and we were unable to recover it.
00:33:13.744 [2024-07-26 09:06:32.008383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.744 [2024-07-26 09:06:32.008409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.744 qpair failed and we were unable to recover it.
00:33:13.746 [2024-07-26 09:06:32.022939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.022965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.023146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.023172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.023287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.023313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.023503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.023532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.023659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.023687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.023887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.023912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.024070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.024099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.024262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.024287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.024407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.024433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.024549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.024574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.024721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.024747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.024930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.024955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.025097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.025128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.025276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.025302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.025468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.025493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.025644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.025691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.025857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.025885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.026025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.026051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.026230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.026255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.026417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.026445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.026636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.026661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.026782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.026807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.026984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.027013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.027188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.027214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.027383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.027427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.027588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.027616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.027764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.027789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.027967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.028007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.028176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.028202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.028348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.028373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.028517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.028543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.028684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.028712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.028873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.028899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.029047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.029085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.029203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.029229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.029342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.029368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.029496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.029521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.029677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.029702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.029808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.029834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.030009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.030035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.030196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.030221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.030361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.030386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.030509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.030534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.030681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.030706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.030850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.030875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.031018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.031043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.031168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.031194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.031337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.031363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.031517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.031542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.031683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.031708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.031827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.031854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.032019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.032047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.032194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.032220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.032341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.032366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.032480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.032505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.032691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.032719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.032890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.032916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.033028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.033053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.033217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.033244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.033360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.033386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 
00:33:13.746 [2024-07-26 09:06:32.033531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.033557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.033760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.033786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.033935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.746 [2024-07-26 09:06:32.033960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.746 qpair failed and we were unable to recover it. 00:33:13.746 [2024-07-26 09:06:32.034087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.034114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.034254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.034280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 
00:33:13.747 [2024-07-26 09:06:32.034396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.034422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.034572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.034597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.034762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.034804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.034977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.035003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.035126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.035152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 
00:33:13.747 [2024-07-26 09:06:32.035294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.035320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.035487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.035513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.035653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.035679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.035837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.035864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.036010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.036035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 
00:33:13.747 [2024-07-26 09:06:32.036190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.036216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.036335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.036360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.036502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.036528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.036666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.036694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 00:33:13.747 [2024-07-26 09:06:32.036854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.747 [2024-07-26 09:06:32.036882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.747 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.057287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.057315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.057481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.057506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.057691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.057720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.057886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.057914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.058087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.058114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.058273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.058301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.058490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.058518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.058663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.058689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.058860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.058885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.059071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.059098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.059250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.059276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.059438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.059466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.059641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.059667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.059837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.059862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.060026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.060054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.060190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.060219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.060361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.060386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.060530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.060556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.060730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.060758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.060949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.060975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.061124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.061168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.061332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.061361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.061527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.061552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.061700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.061726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.061871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.061897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.062071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.062097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.062214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.062243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.062369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.062395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.062552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.062577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.062702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.062727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.062898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.062941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.063101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.063127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.063275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.063319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.063484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.063513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.063701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.063726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.063917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.063947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.064080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.064109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.064249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.064274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.064421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.064447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.064556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.064581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.064715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.064753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.064915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.064960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.065112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.065140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.065280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.065306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.065441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.065484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.065625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.065651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.065808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.065853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.065977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.066003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.066156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.066182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.066317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.066360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.066495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.066539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.066739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.066782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.066934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.066960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.067156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.067206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.067375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.067418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 
00:33:13.749 [2024-07-26 09:06:32.067568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.067593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.749 qpair failed and we were unable to recover it. 00:33:13.749 [2024-07-26 09:06:32.067706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.749 [2024-07-26 09:06:32.067739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.067915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.067942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.068071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.068099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.068278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.068304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 
00:33:13.750 [2024-07-26 09:06:32.068446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.068472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.068586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.068612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.068730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.068755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.068876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.068902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.069039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.069080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 
00:33:13.750 [2024-07-26 09:06:32.069277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.069302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.069475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.069504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.069705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.069734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.069893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.069921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.070054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.070114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 
00:33:13.750 [2024-07-26 09:06:32.070287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.070316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.070470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.070498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.070654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.070682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.070810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.070839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.071020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.071049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 
00:33:13.750 [2024-07-26 09:06:32.071227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.071253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.071427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.071455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.071588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.071616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.071745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.071773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.071996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.072035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 
00:33:13.750 [2024-07-26 09:06:32.072189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.072221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.072387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.072431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.072599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.072642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.072814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.072859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.073006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.073032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 
00:33:13.750 [2024-07-26 09:06:32.073215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.073258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.073397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.073440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.073642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.073670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.073859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.073886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.074065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.074092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 
00:33:13.750 [2024-07-26 09:06:32.074226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.074255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.074445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.074487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.074655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.074699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.074813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.074839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.074988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.075015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 
00:33:13.750 [2024-07-26 09:06:32.075185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.075231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.075428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.075457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.075730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.075761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.075922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.075951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.076105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.076148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 
00:33:13.750 [2024-07-26 09:06:32.076316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.076344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.076504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.076532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.076694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.076724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.076898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.076924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.077096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.077123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 
00:33:13.750 [2024-07-26 09:06:32.077270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.077296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.077442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.077471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.077666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.077699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.750 [2024-07-26 09:06:32.077863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.750 [2024-07-26 09:06:32.077892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.750 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.078054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.078088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.078233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.078259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.078426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.078456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.078594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.078681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.078843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.078872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.079006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.079035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.079228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.079254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.079424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.079452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.079604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.079633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.079820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.079848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.079997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.080023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.080176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.080202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.080378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.080404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.080567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.080594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.080745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.080770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.080937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.080966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.081115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.081142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.081315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.081357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.081530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.081555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.081693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.081721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.081906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.081935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.082103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.082129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.082243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.082268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.082431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.082459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.082659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.082711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.082871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.082899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.083029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.083064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.083234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.083259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.083386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.083411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.083577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.083606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.083736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.083765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.083901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.083929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.084056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.084105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.084253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.084279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.084441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.084470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.084600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.084628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.084787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.084815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.084967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.084995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.085157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.085184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.085323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.085353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.085499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.085527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.085655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.085683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.085910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.085967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.086124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.086152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.086324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.086369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.086538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.086581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.086710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.086754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.086873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.086898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.087017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.087044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.087201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.087230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.087416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.087444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.087682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.087735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.087901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.087930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.088140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.088166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.089005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.089035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.089229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.089257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.089407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.089433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.089585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.089612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.089761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.089787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.089927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.089953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 00:33:13.751 [2024-07-26 09:06:32.090104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.090131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.751 qpair failed and we were unable to recover it. 
00:33:13.751 [2024-07-26 09:06:32.090278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.751 [2024-07-26 09:06:32.090304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 
00:33:13.752 [2024-07-26 09:06:32.100917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.100943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.101119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.101159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.101332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.101363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.101521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.101550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.101705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.101734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 
00:33:13.752 [2024-07-26 09:06:32.101917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.101945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.102106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.102132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.102275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.102301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.102463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.102491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.102655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.102686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 
00:33:13.752 [2024-07-26 09:06:32.102879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.102908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.103071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.103114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.103262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.103288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.103402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.103444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.103575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.103611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 
00:33:13.752 [2024-07-26 09:06:32.103831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.103860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.103992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.104020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.752 [2024-07-26 09:06:32.104170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.752 [2024-07-26 09:06:32.104197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.752 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.104372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.104398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.104568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.104597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 
00:33:13.753 [2024-07-26 09:06:32.104765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.104808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.105034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.105068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.105237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.105263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.105417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.105445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.105719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.105771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 
00:33:13.753 [2024-07-26 09:06:32.105931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.105960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.106168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.106194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.106343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.106368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.106533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.106562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.106698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.106727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 
00:33:13.753 [2024-07-26 09:06:32.106909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.106937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.107094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.107136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.107291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.107317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.107452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.107481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.107619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.107658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 
00:33:13.753 [2024-07-26 09:06:32.107818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.107847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.108010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.108039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.108190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.108216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.108361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.108388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.108553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.108582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 
00:33:13.753 [2024-07-26 09:06:32.108711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.108740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.108893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.108921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.109086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.109130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.109249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.109275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.109439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.109468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 
00:33:13.753 [2024-07-26 09:06:32.109625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.109653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.109835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.109863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.110036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.110067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.110183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.110209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.110385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.110411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 
00:33:13.753 [2024-07-26 09:06:32.110575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.110603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.110764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.110792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.110943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.110971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.111097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.111126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.111251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.111281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 
00:33:13.753 [2024-07-26 09:06:32.111424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.111449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.111586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.111615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.111775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.111805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.111939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.111967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 00:33:13.753 [2024-07-26 09:06:32.112138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.753 [2024-07-26 09:06:32.112164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.753 qpair failed and we were unable to recover it. 
00:33:13.753 [2024-07-26 09:06:32.112307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.112332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.112502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.112530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.112672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.112715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.112844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.112873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.113032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.113063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.113220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.113246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.113408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.113436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.113623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.113652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.113813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.113842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.113977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.114005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.114195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.114221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.114365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.114408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.114566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.114595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.114805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.114834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.114961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.114989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.115124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.753 [2024-07-26 09:06:32.115150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.753 qpair failed and we were unable to recover it.
00:33:13.753 [2024-07-26 09:06:32.115300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.115326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.115487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.115516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.115641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.115669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.115833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.115862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.116047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.116082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.116248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.116289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.116463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.116494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.116696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.116763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.117015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.117077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.117217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.117243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.117386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.117416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.117650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.117702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.117963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.118013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.118193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.118221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.118490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.118543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.118673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.118702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.118955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.119005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.119200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.119226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.119452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.119509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.119754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.119805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.119952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.119996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.120194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.120220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.120367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.120393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.120674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.120727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.120912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.120940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.121078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.121120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.121256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.121281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.121480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.121539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.121761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.121816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.121954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.121982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.122181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.122207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.122346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.122372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.122547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.122576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.122829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.122883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.123043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.123111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.123268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.123295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.123489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.123518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.123679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.123724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.123863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.123892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.124083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.124109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.124232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.124257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.124423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.124451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.124599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.124640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.124827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.124856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.125018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.125046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.125223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.125253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.125399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.125427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.125570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.125610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.125773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.125801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.125924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.125953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.126118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.126144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.126292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.126318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.126464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.126506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.126664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.126692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.126816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.126845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.127010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.127036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.127187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.127213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.127385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.127414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.127574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.127602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.127734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.127763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.127931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.127959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.128095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.128140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.128281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.754 [2024-07-26 09:06:32.128308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.754 qpair failed and we were unable to recover it.
00:33:13.754 [2024-07-26 09:06:32.128482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.128508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.128768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.128824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.128862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3a470 (9): Bad file descriptor
00:33:13.755 [2024-07-26 09:06:32.129122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.129161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.129283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.129310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.129491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.129516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.129673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.129702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.129862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.129890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.130032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.130064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.130228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.130253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.130387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.130415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.130587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.130612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.130795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.130849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.131011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.131039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.131208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.131234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.131347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.131390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.131576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.131604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.131897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.131948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.132149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.132187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.132340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.132367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.132536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.132562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.132813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.132862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.133036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.133071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.133226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.133252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.133403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.133447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.133586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.133614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.133763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.133788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.133932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.133959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.134123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.134153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.134321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.134346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.134468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.134509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.134663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.134688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.134860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.134886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.135020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.135050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.135204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.135230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.135372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.135398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.135541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.135566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.135716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.135745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.135890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.135916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.136075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.136115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.136311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.136341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.136481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.136507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.136631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.136657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.136809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.136835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.136982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.137008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.137135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.137179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.137306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.137334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.137466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.137491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.137637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.137662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.137916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.137965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.138141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.138168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.138330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.138359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.138544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.138573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.138759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.138784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.138948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.138977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.139122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.139148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.139289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.139315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.139486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.139546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.139705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.139734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.139896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.139921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.140040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.140087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.140282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.140307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.755 [2024-07-26 09:06:32.140454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.755 [2024-07-26 09:06:32.140479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.755 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.140670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.140698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.140862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.140892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.141056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.141086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.141214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.141239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.141391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.141416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.141570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.141595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.141712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.141738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.141874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.141902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.142096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.142123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.142262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.142292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.142456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.142484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.142650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.142676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.142826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.142851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.142997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.143022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.143197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.143227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.143431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.143460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.143613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.143641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.143804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.143829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.143996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.144021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.144211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.144237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.144359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.144384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.144562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.144603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.144765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.144793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.144956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.144985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.145156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.145182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.145301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.145326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.145469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.145494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.145618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.145644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.145794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.145820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.145938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.145963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.146080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.146108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.146292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.146321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.146510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.146536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.146692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.146735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.146907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.146934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.147092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.147129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.147273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.147300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.147453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.147482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.147621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.147647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.147776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.147802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.147975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.148018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.148185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.148214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.148330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.148372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.148523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.148548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.148695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.148720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.148825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.148850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.149054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.149087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.149271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.149297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.149469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.756 [2024-07-26 09:06:32.149494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:13.756 qpair failed and we were unable to recover it.
00:33:13.756 [2024-07-26 09:06:32.149680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.756 [2024-07-26 09:06:32.149705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.756 qpair failed and we were unable to recover it. 00:33:13.756 [2024-07-26 09:06:32.149829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.756 [2024-07-26 09:06:32.149854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.756 qpair failed and we were unable to recover it. 00:33:13.756 [2024-07-26 09:06:32.150023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.756 [2024-07-26 09:06:32.150048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.756 qpair failed and we were unable to recover it. 00:33:13.756 [2024-07-26 09:06:32.150200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.756 [2024-07-26 09:06:32.150226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.756 qpair failed and we were unable to recover it. 00:33:13.756 [2024-07-26 09:06:32.150417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.756 [2024-07-26 09:06:32.150442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.756 qpair failed and we were unable to recover it. 
00:33:13.756 [2024-07-26 09:06:32.150632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.150683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.150851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.150879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.151017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.151042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.151251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.151289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.151477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.151504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 
00:33:13.757 [2024-07-26 09:06:32.151620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.151646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.151771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.151797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.151980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.152006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.152156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.152184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.152301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.152327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 
00:33:13.757 [2024-07-26 09:06:32.152451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.152477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.152650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.152676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.152821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.152846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.152997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.153042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.153223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.153255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 
00:33:13.757 [2024-07-26 09:06:32.153455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.153483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.153672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.153700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.153863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.153890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.154084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.154122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.154283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.154311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 
00:33:13.757 [2024-07-26 09:06:32.154476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.154501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.154617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.154659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.154844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.154873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.155010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.155035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.155170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.155196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 
00:33:13.757 [2024-07-26 09:06:32.155373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.155407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.155593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.155619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.155763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.155789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.155949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.155991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.156132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.156157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 
00:33:13.757 [2024-07-26 09:06:32.156307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.156333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.156478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.156521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.757 [2024-07-26 09:06:32.156708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.757 [2024-07-26 09:06:32.156734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.757 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.156907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.156935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.157102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.157143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 
00:33:13.758 [2024-07-26 09:06:32.157285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.157310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.157472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.157500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.157672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.157698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.157871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.157897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.158080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.158129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 
00:33:13.758 [2024-07-26 09:06:32.158258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.158283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.158414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.158440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.158590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.158615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.158772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.158798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.158921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.158946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 
00:33:13.758 [2024-07-26 09:06:32.159093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.159137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.159263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.159291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.159492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.159517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.159672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.159700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.159834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.159863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 
00:33:13.758 [2024-07-26 09:06:32.160026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.160052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.160215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.160241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.160388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.160430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.160558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.160584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 00:33:13.758 [2024-07-26 09:06:32.160725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.758 [2024-07-26 09:06:32.160755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:13.758 qpair failed and we were unable to recover it. 
00:33:14.041 [2024-07-26 09:06:32.160929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.160959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.161109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.161136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.161269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.161308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.161532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.161561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.161713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.161738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 
00:33:14.041 [2024-07-26 09:06:32.161879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.161905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.162072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.162119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.162286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.162312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.162485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.162510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.162630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.162655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 
00:33:14.041 [2024-07-26 09:06:32.162825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.162850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.162998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.163028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.163191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.163217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.163375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.163400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.163565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.163593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 
00:33:14.041 [2024-07-26 09:06:32.163726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.163753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.163916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.163941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.164083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.164110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.164252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.164277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.164402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.164429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 
00:33:14.041 [2024-07-26 09:06:32.164600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.164641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.164801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.164830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.165020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.165045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.165195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.165220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 00:33:14.041 [2024-07-26 09:06:32.165394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.041 [2024-07-26 09:06:32.165423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.041 qpair failed and we were unable to recover it. 
00:33:14.041 [2024-07-26 09:06:32.165565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.041 [2024-07-26 09:06:32.165591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.041 qpair failed and we were unable to recover it.
00:33:14.041 (message repeated 27 more times for tqpair=0x7fcfb4000b90, 09:06:32.165736 through 09:06:32.170327)
00:33:14.041 [2024-07-26 09:06:32.170536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.041 [2024-07-26 09:06:32.170578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.041 qpair failed and we were unable to recover it.
00:33:14.043 (message repeated 86 more times for tqpair=0xb2c4b0, 09:06:32.170730 through 09:06:32.186154)
00:33:14.043 [2024-07-26 09:06:32.186293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.186318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.186490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.186519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.186664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.186690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.186831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.186856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.187044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.187081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 
00:33:14.043 [2024-07-26 09:06:32.187255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.187281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.187441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.187470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.187631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.187659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.187824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.187857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.188006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.188031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 
00:33:14.043 [2024-07-26 09:06:32.188188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.188232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.188368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.188393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.188539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.188564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.188716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.188746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.188918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.188944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 
00:33:14.043 [2024-07-26 09:06:32.189093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.189119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.189245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.189270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.189391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.189416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.189557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.189599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.189736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.189764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 
00:33:14.043 [2024-07-26 09:06:32.189935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.189961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.190120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.190150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.190291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.190319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.190455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.190480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.190634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.190676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 
00:33:14.043 [2024-07-26 09:06:32.190827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.190855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.191021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.191046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.191202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.191228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.191398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.191423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.191546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.191571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 
00:33:14.043 [2024-07-26 09:06:32.191681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.191706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.043 [2024-07-26 09:06:32.191878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.043 [2024-07-26 09:06:32.191906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.043 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.192070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.192096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.192254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.192280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.192401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.192443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 
00:33:14.044 [2024-07-26 09:06:32.192587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.192612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.192786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.192811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.192967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.192995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.193131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.193157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.193278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.193303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 
00:33:14.044 [2024-07-26 09:06:32.193499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.193527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.193670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.193695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.193819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.193844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.194007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.194034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.194234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.194261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 
00:33:14.044 [2024-07-26 09:06:32.194423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.194451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.194574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.194603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.194766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.194792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.194901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.194926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.195133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.195162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 
00:33:14.044 [2024-07-26 09:06:32.195297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.195322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.195468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.195493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.195632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.195660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.195851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.195876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.196040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.196076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 
00:33:14.044 [2024-07-26 09:06:32.196238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.196266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.196409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.196434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.196602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.196643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.196818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.196844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.196992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.197018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 
00:33:14.044 [2024-07-26 09:06:32.197168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.197194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.197365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.197391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.197532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.197557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.197752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.197780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.197944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.197974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 
00:33:14.044 [2024-07-26 09:06:32.198112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.198138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.198283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.198324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.198483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.198511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.198655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.198680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.198805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.198830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 
00:33:14.044 [2024-07-26 09:06:32.199002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.199030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.199194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.199219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.199381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.199409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.199563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.199591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.199728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.199753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 
00:33:14.044 [2024-07-26 09:06:32.199897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.199938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.200114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.200143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.200312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.200338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.200477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.200505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 00:33:14.044 [2024-07-26 09:06:32.200694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.044 [2024-07-26 09:06:32.200722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.044 qpair failed and we were unable to recover it. 
00:33:14.046 [2024-07-26 09:06:32.221185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.221213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.221374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.221399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.221571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.221614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.221772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.221800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.221996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.222022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 
00:33:14.046 [2024-07-26 09:06:32.222204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.222230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.222400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.222428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.222589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.222614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.222758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.222800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.222986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.223014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 
00:33:14.046 [2024-07-26 09:06:32.223209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.223235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.223377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.223418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.223606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.223634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.223799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.223824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.223968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.224009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 
00:33:14.046 [2024-07-26 09:06:32.224163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.224189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.224334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.224360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.224530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.224558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.224726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.224751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.224899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.224926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 
00:33:14.046 [2024-07-26 09:06:32.225113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.225143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.225331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.225359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.225528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.225560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.225733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.225775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.225913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.225941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 
00:33:14.046 [2024-07-26 09:06:32.226100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.046 [2024-07-26 09:06:32.226127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.046 qpair failed and we were unable to recover it. 00:33:14.046 [2024-07-26 09:06:32.226277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.226302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.226473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.226498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.226642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.226666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.226846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.226888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 
00:33:14.047 [2024-07-26 09:06:32.227072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.227101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.227260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.227285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.227477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.227505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.227633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.227663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.227826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.227851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 
00:33:14.047 [2024-07-26 09:06:32.227975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.228016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.228184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.228213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.228349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.228375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.228560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.228588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.228769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.228798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 
00:33:14.047 [2024-07-26 09:06:32.228933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.228959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.229099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.229125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.229309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.229334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.229479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.229504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.229665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.229693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 
00:33:14.047 [2024-07-26 09:06:32.229821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.229849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.230034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.230071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.230276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.230302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.230471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.230499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.230637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.230663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 
00:33:14.047 [2024-07-26 09:06:32.230852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.230880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.231041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.231075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.231247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.231272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.231396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.231422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.231617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.231645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 
00:33:14.047 [2024-07-26 09:06:32.231809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.231834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.232020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.232047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.232221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.232250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.232393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.232419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.232591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.232632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 
00:33:14.047 [2024-07-26 09:06:32.232834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.232859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.233012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.233037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.233190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.233215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.233427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.233459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.233636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.233661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 
00:33:14.047 [2024-07-26 09:06:32.233782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.233807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.233952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.233977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.234145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.234172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.234306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.234334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.234505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.234531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 
00:33:14.047 [2024-07-26 09:06:32.234706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.234731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.234922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.234950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.235116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.235145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.235298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.235323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 00:33:14.047 [2024-07-26 09:06:32.235464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.047 [2024-07-26 09:06:32.235489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.047 qpair failed and we were unable to recover it. 
00:33:14.047 [2024-07-26 09:06:32.235670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.047 [2024-07-26 09:06:32.235695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.047 qpair failed and we were unable to recover it.
[... identical error triplet repeated 114 more times, timestamps 2024-07-26 09:06:32.235839 through 09:06:32.256621: every connect() attempt to 10.0.0.2:4420 for tqpair=0xb2c4b0 failed with errno 111 (ECONNREFUSED) and the qpair could not be recovered ...]
00:33:14.049 [2024-07-26 09:06:32.256732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.256758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.256901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.256940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.257076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.257105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.257248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.257274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.257388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.257413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 
00:33:14.049 [2024-07-26 09:06:32.257535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.257561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.257728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.257753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.257918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.257945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.258112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.258142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.258278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.258308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 
00:33:14.049 [2024-07-26 09:06:32.258466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.258492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.258685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.258711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.258849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.258874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.049 qpair failed and we were unable to recover it. 00:33:14.049 [2024-07-26 09:06:32.258994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.049 [2024-07-26 09:06:32.259019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.259199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.259243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.259380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.259405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.259595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.259623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.259806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.259834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.259994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.260019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.260142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.260168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.260305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.260349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.260516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.260541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.260653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.260693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.260891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.260919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.261065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.261092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.261244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.261270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.261396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.261424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.261617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.261642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.261838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.261866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.262025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.262053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.262202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.262227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.262339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.262364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.262491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.262519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.262687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.262712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.262870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.262898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.263095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.263122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.263249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.263274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.263404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.263445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.263600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.263628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.263791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.263817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.264011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.264039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.264230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.264258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.264428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.264453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.264599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.264641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.264774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.264803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.264967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.264993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.265152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.265180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.265341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.265370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.265528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.265553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.265667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.265709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.265849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.265878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.266038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.266069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.266233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.266261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.266395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.266424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.266594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.266619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.266788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.266816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.266986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.267011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.267154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.267191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.267338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.267363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.267505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.267545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.267709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.267734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.267859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.267883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.268037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.268075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.268198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.268224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.268369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.268394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.268592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.268620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.268786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.268811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.268972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.269000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.269176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.269202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.269347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.269372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.269534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.269562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.269724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.269749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.269896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.269921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.270072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.270098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 00:33:14.050 [2024-07-26 09:06:32.270228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.050 [2024-07-26 09:06:32.270257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.050 qpair failed and we were unable to recover it. 
00:33:14.050 [2024-07-26 09:06:32.270428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.051 [2024-07-26 09:06:32.270453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.051 qpair failed and we were unable to recover it.
00:33:14.051 [... the same connect()/qpair-failure message pair repeats continuously from 09:06:32.270 through 09:06:32.291 (errno 111, tqpair=0xb2c4b0, addr=10.0.0.2, port=4420); duplicate repetitions elided ...]
00:33:14.052 [2024-07-26 09:06:32.291414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.052 [2024-07-26 09:06:32.291439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.052 qpair failed and we were unable to recover it. 00:33:14.052 [2024-07-26 09:06:32.291625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.052 [2024-07-26 09:06:32.291653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.052 qpair failed and we were unable to recover it. 00:33:14.052 [2024-07-26 09:06:32.291815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.052 [2024-07-26 09:06:32.291841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.052 qpair failed and we were unable to recover it. 00:33:14.052 [2024-07-26 09:06:32.292000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.052 [2024-07-26 09:06:32.292028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.052 qpair failed and we were unable to recover it. 00:33:14.052 [2024-07-26 09:06:32.292159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.052 [2024-07-26 09:06:32.292187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.052 qpair failed and we were unable to recover it. 
00:33:14.052 [2024-07-26 09:06:32.292379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.292405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.292551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.292576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.292762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.292790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.292955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.292980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.293131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.293157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.293308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.293351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.293525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.293550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.293693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.293718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.293849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.293877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.294035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.294065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.294230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.294258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.294393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.294422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.294587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.294613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.294777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.294805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.294974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.295000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.295148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.295175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.295323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.295349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.295518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.295546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.295684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.295713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.295852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.295893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.296021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.296049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.296224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.296250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.296416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.296446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.296601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.296629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.296824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.296850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.297003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.297044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.297225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.297250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.297362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.297387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.297527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.297568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.297694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.297723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.297865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.297890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.298042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.298073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.298227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.298253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.298404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.298430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.298548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.298593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.298747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.298775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.298924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.298950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.299096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.299123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.299245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.299270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.299390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.299415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.299562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.299605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.299780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.299805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.299951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.299977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.300118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.300144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.300262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.300287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.300494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.300520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.300676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.300701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.300817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.300842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.300969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.300994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.301135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.301161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.301301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.301329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.301497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.301522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.301664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.301689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.301838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.301863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.302026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.302055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.302225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.302250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.302407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.302435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.302607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.302633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.302780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.302805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.302945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.302975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 
00:33:14.053 [2024-07-26 09:06:32.303117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.303144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.303255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.303280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.303393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.053 [2024-07-26 09:06:32.303418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.053 qpair failed and we were unable to recover it. 00:33:14.053 [2024-07-26 09:06:32.303586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.054 [2024-07-26 09:06:32.303611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.054 qpair failed and we were unable to recover it. 00:33:14.054 [2024-07-26 09:06:32.303800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.054 [2024-07-26 09:06:32.303828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.054 qpair failed and we were unable to recover it. 
00:33:14.054 [2024-07-26 09:06:32.304016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.054 [2024-07-26 09:06:32.304045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.054 qpair failed and we were unable to recover it. 00:33:14.054 [2024-07-26 09:06:32.304202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.054 [2024-07-26 09:06:32.304228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.054 qpair failed and we were unable to recover it. 00:33:14.054 [2024-07-26 09:06:32.304373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.054 [2024-07-26 09:06:32.304399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.054 qpair failed and we were unable to recover it. 00:33:14.054 [2024-07-26 09:06:32.304569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.054 [2024-07-26 09:06:32.304597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.054 qpair failed and we were unable to recover it. 00:33:14.054 [2024-07-26 09:06:32.304765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.054 [2024-07-26 09:06:32.304791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.054 qpair failed and we were unable to recover it. 
00:33:14.054 [2024-07-26 09:06:32.304911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.054 [2024-07-26 09:06:32.304936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.054 qpair failed and we were unable to recover it. 
00:33:14.054 [... the same error group — posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 09:06:32.304911 through 09:06:32.325838 ...]
00:33:14.056 [2024-07-26 09:06:32.325949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.325974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.326159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.326185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.326334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.326359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.326518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.326544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.326691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.326716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.326858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.326883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.327028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.327054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.327207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.327232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.327393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.327421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.327542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.327570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.327728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.327757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.327887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.327912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.328056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.328094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.328241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.328267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.328381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.328424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.328563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.328591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.328782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.328806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.328929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.328954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.329099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.329125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.329305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.329330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.329470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.329514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.329702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.329730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.329903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.329928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.330042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.330089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.330256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.330285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.330450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.330475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.330594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.330619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.330790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.330830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.330999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.331024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.331225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.331254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.331407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.331435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.331600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.331626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.331739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.331780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.331947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.331973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.332120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.332147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.332269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.332294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.332470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.332495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.332637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.332662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.332860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.332888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.333045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.333082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.333242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.333268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.333385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.333410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.333523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.333548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.333687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.333712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.333863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.333888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.334064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.334108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.334269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.334295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.334455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.334483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.334679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.334707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.334864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.334890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.335035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.335066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.335216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.335241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.335383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.335408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.335558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.335583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.335769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.335797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.335988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.336013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.336154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.336197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.336324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.336353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.336520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.336544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.336694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.336719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 
00:33:14.056 [2024-07-26 09:06:32.336836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.056 [2024-07-26 09:06:32.336862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.056 qpair failed and we were unable to recover it. 00:33:14.056 [2024-07-26 09:06:32.337036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.337080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.337249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.337274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.337458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.337486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.337649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.337674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 
00:33:14.057 [2024-07-26 09:06:32.337847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.337875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.338001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.338029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.338213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.338239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.338424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.338452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.338610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.338638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 
00:33:14.057 [2024-07-26 09:06:32.338779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.338805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.338949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.338975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.339120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.339149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.339343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.339369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 00:33:14.057 [2024-07-26 09:06:32.339526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.057 [2024-07-26 09:06:32.339551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.057 qpair failed and we were unable to recover it. 
00:33:14.057 [2024-07-26 09:06:32.339672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.057 [2024-07-26 09:06:32.339697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.057 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats with advancing timestamps throughout the 09:06:32.339–09:06:32.360 window ...]
00:33:14.059 [2024-07-26 09:06:32.360348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.059 [2024-07-26 09:06:32.360374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.059 qpair failed and we were unable to recover it.
00:33:14.059 [2024-07-26 09:06:32.360518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.360543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.360705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.360733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.360882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.360907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.361096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.361122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.361272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.361297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 
00:33:14.059 [2024-07-26 09:06:32.361507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.361532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.361706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.361732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.361897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.361925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.362090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.362119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.362288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.362313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 
00:33:14.059 [2024-07-26 09:06:32.362477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.362505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.362640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.362669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.362832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.362857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.363021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.363049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.363214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.363243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 
00:33:14.059 [2024-07-26 09:06:32.363430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.363455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.363603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.363628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.363770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.363811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.363968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.363993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.364114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.364161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 
00:33:14.059 [2024-07-26 09:06:32.364347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.364375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.364520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.364545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.364691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.364716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.364864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.364908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.365047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.365092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 
00:33:14.059 [2024-07-26 09:06:32.365255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.365283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.365442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.365470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.365606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.365632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.365775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.365801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.365950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.365978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 
00:33:14.059 [2024-07-26 09:06:32.366166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.366192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.366388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.366416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.366604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.366632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.366795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.366821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.366971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.366996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 
00:33:14.059 [2024-07-26 09:06:32.367153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.367179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.367358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.367384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.367525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.367553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.367719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.367747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.367895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.367921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 
00:33:14.059 [2024-07-26 09:06:32.368070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.368116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.368278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.368307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.368475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.368501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.368644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.368686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.368842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.368870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 
00:33:14.059 [2024-07-26 09:06:32.369022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.369050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.369222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.369248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.369412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.369440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.369631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.369656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.059 [2024-07-26 09:06:32.369784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.369809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 
00:33:14.059 [2024-07-26 09:06:32.369953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.059 [2024-07-26 09:06:32.369979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.059 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.370135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.370162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.370316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.370342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.370492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.370534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.370706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.370731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 
00:33:14.060 [2024-07-26 09:06:32.370885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.370913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.371072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.371097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.371253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.371279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.371427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.371452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.371565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.371590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 
00:33:14.060 [2024-07-26 09:06:32.371734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.371760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.371899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.371924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.372071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.372117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.372274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.372299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.372422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.372448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 
00:33:14.060 [2024-07-26 09:06:32.372590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.372619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.372739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.372764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.372910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.372952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.373116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.373142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.373296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.373321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 
00:33:14.060 [2024-07-26 09:06:32.373466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.373491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.373657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.373685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.373867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.373892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.374034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.374070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 00:33:14.060 [2024-07-26 09:06:32.374229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.060 [2024-07-26 09:06:32.374254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.060 qpair failed and we were unable to recover it. 
00:33:14.060 [2024-07-26 09:06:32.374369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.060 [2024-07-26 09:06:32.374394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.060 qpair failed and we were unable to recover it.
00:33:14.061 [2024-07-26 09:06:32.388262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.061 [2024-07-26 09:06:32.388304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.061 qpair failed and we were unable to recover it.
00:33:14.062 [2024-07-26 09:06:32.396070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.396097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.396239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.396268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.396469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.396495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.396718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.396782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.396945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.396973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 
00:33:14.062 [2024-07-26 09:06:32.397169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.397196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.397327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.397355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.397494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.397523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.397680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.397705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.397820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.397846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 
00:33:14.062 [2024-07-26 09:06:32.398011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.398043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.398195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.398220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.398362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.398387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.398558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.398584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.398706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.398733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 
00:33:14.062 [2024-07-26 09:06:32.398903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.398944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.399087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.399128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.399293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.399319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.399483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.399511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.399644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.399672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 
00:33:14.062 [2024-07-26 09:06:32.399820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.399846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.399959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.399984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.400147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.400176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.400347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.400373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.400493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.400535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 
00:33:14.062 [2024-07-26 09:06:32.400703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.400728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.400875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.400900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.401045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.401093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.401242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.401268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.401412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.401437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 
00:33:14.062 [2024-07-26 09:06:32.401597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.401626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.401786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.401816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.401987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.402012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.402157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.402184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.402315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.402341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 
00:33:14.062 [2024-07-26 09:06:32.402482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.402509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.402698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.402726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.402892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.402921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.403069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.403095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.403266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.403291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 
00:33:14.062 [2024-07-26 09:06:32.403465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.403494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.403658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.403683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.403847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.403876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.404002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.404030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.404221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.404248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 
00:33:14.062 [2024-07-26 09:06:32.404387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.062 [2024-07-26 09:06:32.404416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.062 qpair failed and we were unable to recover it. 00:33:14.062 [2024-07-26 09:06:32.404603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.404631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.404834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.404859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.404995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.405023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.405196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.405222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 
00:33:14.063 [2024-07-26 09:06:32.405370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.405401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.405594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.405623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.405762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.405788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.405961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.405987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.406111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.406138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 
00:33:14.063 [2024-07-26 09:06:32.406281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.406307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.406484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.406510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.406699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.406728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.406888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.406916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.407056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.407088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 
00:33:14.063 [2024-07-26 09:06:32.407235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.407261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.407406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.407449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.407589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.407614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.407799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.407827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.407963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.407991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 
00:33:14.063 [2024-07-26 09:06:32.408163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.408189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.408335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.408362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.408506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.408548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.408689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.408715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.408860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.408885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 
00:33:14.063 [2024-07-26 09:06:32.409026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.409055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.409232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.409257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.409424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.409467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.409604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.409634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.409767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.409792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 
00:33:14.063 [2024-07-26 09:06:32.409934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.409959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.410134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.410163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.410341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.410366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.410538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.410563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 00:33:14.063 [2024-07-26 09:06:32.410744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.063 [2024-07-26 09:06:32.410770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.063 qpair failed and we were unable to recover it. 
00:33:14.065 [identical record pair repeats: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it."; timestamps advance from 09:06:32.410976 through 09:06:32.430994]
00:33:14.065 [2024-07-26 09:06:32.431156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.431185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.431326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.431352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.431499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.431525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.431667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.431692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.431831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.431857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 
00:33:14.065 [2024-07-26 09:06:32.432015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.432044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.432212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.432240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.432415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.432440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.432558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.432584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.432757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.432800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 
00:33:14.065 [2024-07-26 09:06:32.432960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.432986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.433153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.433182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.433317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.433347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.433510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.433536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.433681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.433726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 
00:33:14.065 [2024-07-26 09:06:32.433865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.433891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.434089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.434118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.434255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.434282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.434478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.434506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.434650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.434676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 
00:33:14.065 [2024-07-26 09:06:32.434790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.434820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.435027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.435052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.435205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.435231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.435399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.435427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.435550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.435579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 
00:33:14.065 [2024-07-26 09:06:32.435741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.435767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.435915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.435941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.436088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.436114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.436261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.436286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.436425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.436450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 
00:33:14.065 [2024-07-26 09:06:32.436569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.436595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.436702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.436727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.436851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.065 [2024-07-26 09:06:32.436878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.065 qpair failed and we were unable to recover it. 00:33:14.065 [2024-07-26 09:06:32.437049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.437084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.437235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.437261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 
00:33:14.066 [2024-07-26 09:06:32.437390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.437415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.437639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.437664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.437779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.437804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.437946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.437972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.438178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.438206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 
00:33:14.066 [2024-07-26 09:06:32.438365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.438391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.438532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.438558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.438710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.438737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.438904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.438932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.439054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.439104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 
00:33:14.066 [2024-07-26 09:06:32.439232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.439257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.439412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.439438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.439606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.439635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.439792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.439821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.439954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.439979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 
00:33:14.066 [2024-07-26 09:06:32.440119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.440145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.440317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.440345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.440536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.440562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.440677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.440719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.440885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.440914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 
00:33:14.066 [2024-07-26 09:06:32.441078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.441110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.441230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.441256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.441427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.441469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.441638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.441664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.441787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.441812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 
00:33:14.066 [2024-07-26 09:06:32.441953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.441982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.442114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.442141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.442283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.442309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.442456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.442482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.442598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.442623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 
00:33:14.066 [2024-07-26 09:06:32.442774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.442817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.442971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.442999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.443129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.443155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.443333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.443358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.443505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.443533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 
00:33:14.066 [2024-07-26 09:06:32.443695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.443721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.443841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.443882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.444056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.444088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.444235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.444262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.444423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.444448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 
00:33:14.066 [2024-07-26 09:06:32.444595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.444620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.444764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.444789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.444901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.444942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.445110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.445136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 00:33:14.066 [2024-07-26 09:06:32.445252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.066 [2024-07-26 09:06:32.445279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.066 qpair failed and we were unable to recover it. 
00:33:14.068 [2024-07-26 09:06:32.465322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.465347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.465468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.465509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.465680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.465705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.465854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.465880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.466053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.466089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 
00:33:14.068 [2024-07-26 09:06:32.466245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.466274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.466405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.466430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.466575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.466602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.466782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.466811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.466971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.466996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 
00:33:14.068 [2024-07-26 09:06:32.467191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.467220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.467386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.467413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.467563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.467588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.467778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.467807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.467938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.467967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 
00:33:14.068 [2024-07-26 09:06:32.468135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.468161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.468287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.468313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.468463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.468492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.468648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.468673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.468823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.468864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 
00:33:14.068 [2024-07-26 09:06:32.469021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.469049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.469194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.469220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.469341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.469366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.469535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.469563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 00:33:14.068 [2024-07-26 09:06:32.469705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.068 [2024-07-26 09:06:32.469732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.068 qpair failed and we were unable to recover it. 
00:33:14.069 [2024-07-26 09:06:32.469878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.469903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.470051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.470097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.470239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.470265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.470410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.470436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.470609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.470637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 
00:33:14.069 [2024-07-26 09:06:32.470801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.470827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.470978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.471004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.471155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.471199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.471333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.471359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.471530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.471571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 
00:33:14.069 [2024-07-26 09:06:32.471696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.471726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.471889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.471914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.472033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.472080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.472248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.472276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.472469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.472494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 
00:33:14.069 [2024-07-26 09:06:32.472652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.472680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.472814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.472842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.473010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.473035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.473185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.473212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.473331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.473357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 
00:33:14.069 [2024-07-26 09:06:32.473500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.473526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.473690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.473718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.473849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.473877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.474043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.474076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.474240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.474268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 
00:33:14.069 [2024-07-26 09:06:32.474395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.474423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.474559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.474586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.474728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.474753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.474919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.474949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.475124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.475159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 
00:33:14.069 [2024-07-26 09:06:32.475279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.475305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.475488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.475515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.475679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.475708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.475835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.475861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.476006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.476031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 
00:33:14.069 [2024-07-26 09:06:32.476187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.476213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.476331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.476357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.476496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.476521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.476664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.476689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.476832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.476858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 
00:33:14.069 [2024-07-26 09:06:32.477000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.477042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.069 [2024-07-26 09:06:32.477245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.069 [2024-07-26 09:06:32.477271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.069 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.477434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.477464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.477630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.477659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.477803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.477828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 
00:33:14.352 [2024-07-26 09:06:32.477968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.477993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.478191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.478220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.478362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.478388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.478532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.478574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.478707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.478735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 
00:33:14.352 [2024-07-26 09:06:32.478885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.478911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.479068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.479094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.479233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.479261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.479423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.479449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 00:33:14.352 [2024-07-26 09:06:32.479595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.352 [2024-07-26 09:06:32.479620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.352 qpair failed and we were unable to recover it. 
00:33:14.354 [2024-07-26 09:06:32.499891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.499920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.500098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.500124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.500292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.500317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.500475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.500503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.500689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.500718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 
00:33:14.354 [2024-07-26 09:06:32.500879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.500907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.501102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.501128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.501246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.501272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.501394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.501420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.501570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.501596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 
00:33:14.354 [2024-07-26 09:06:32.501767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.501796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.501937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.501964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.502116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.502142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.502289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.502336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.502485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.502511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 
00:33:14.354 [2024-07-26 09:06:32.502647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.502673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.502834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.502862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.502995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.503021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.503154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.503181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.503350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.503378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 
00:33:14.354 [2024-07-26 09:06:32.503520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.503545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.503691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.503716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.503884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.503914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.504054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.504088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.504215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.504240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 
00:33:14.354 [2024-07-26 09:06:32.504355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.504380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.504519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.504545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.504687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.504717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.504833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.504858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.505037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.505070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 
00:33:14.354 [2024-07-26 09:06:32.505225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.505251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.505398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.505424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.505549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.505575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.505684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.505709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.505894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.505936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 
00:33:14.354 [2024-07-26 09:06:32.506088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.506117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.506284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.506310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.506448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.506476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.506643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.506669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 00:33:14.354 [2024-07-26 09:06:32.506861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.354 [2024-07-26 09:06:32.506889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.354 qpair failed and we were unable to recover it. 
00:33:14.354 [2024-07-26 09:06:32.507041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.507084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.507256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.507282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.507404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.507429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.507608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.507636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.507798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.507824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 
00:33:14.355 [2024-07-26 09:06:32.507944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.507973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.508146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.508190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.508386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.508412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.508580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.508609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.508738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.508778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 
00:33:14.355 [2024-07-26 09:06:32.508917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.508943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.509085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.509129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.509266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.509293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.509441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.509467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.509583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.509609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 
00:33:14.355 [2024-07-26 09:06:32.509807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.509832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.509979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.510004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.510143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.510169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.510290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.510317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.510436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.510462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 
00:33:14.355 [2024-07-26 09:06:32.510655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.510683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.510842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.510870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.511065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.511094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.511226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.511251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.511401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.511426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 
00:33:14.355 [2024-07-26 09:06:32.511586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.511611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.511781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.511806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.511949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.511982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.512180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.512206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.512403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.512429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 
00:33:14.355 [2024-07-26 09:06:32.512620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.512648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.512817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.512842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.513004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.513032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.513198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.513227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.513359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.513386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 
00:33:14.355 [2024-07-26 09:06:32.513558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.513599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.513770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.513799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.513959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.513984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.514105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.514132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 00:33:14.355 [2024-07-26 09:06:32.514299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.355 [2024-07-26 09:06:32.514344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.355 qpair failed and we were unable to recover it. 
00:33:14.357 [2024-07-26 09:06:32.534238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.534263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.534386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.534412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.534587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.534612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.534728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.534753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.534876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.534901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 
00:33:14.357 [2024-07-26 09:06:32.535046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.535090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.535243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.535270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.535419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.535444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.535565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.535591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.535769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.535795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 
00:33:14.357 [2024-07-26 09:06:32.535914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.535940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.536086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.536112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.536256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.536282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.536425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.536451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.536595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.536620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 
00:33:14.357 [2024-07-26 09:06:32.536762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.536787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.536927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.536952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.537096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.537122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.537267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.537292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.537411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.537436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 
00:33:14.357 [2024-07-26 09:06:32.537545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.537571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.537719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.537744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.537887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.537912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.538067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.538093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.357 qpair failed and we were unable to recover it. 00:33:14.357 [2024-07-26 09:06:32.538270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.357 [2024-07-26 09:06:32.538295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.538438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.538463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.538608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.538634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.538779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.538804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.538923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.538950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.539089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.539115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.539231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.539256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.539428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.539454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.539604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.539629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.539803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.539829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.539971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.539996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.540118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.540145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.540324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.540350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.540508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.540534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.540659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.540684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.540833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.540858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.541010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.541035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.541161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.541188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.541359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.541385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.541531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.541556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.541724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.541767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.541893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.541921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.542070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.542096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.542266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.542307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.542491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.542520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.542649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.542674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.542783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.542808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.542994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.543019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.543175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.543201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.543394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.543422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.543580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.543608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.543777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.543802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.543971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.543996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.544136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.544166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.544326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.544351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.544494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.544538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.544696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.544725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.544917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.544945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.545077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.545119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.545241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.545266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.545404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.545434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.545561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.545586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.545732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.545757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.545879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.545904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.546090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.546119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.546266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.546293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.546459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.546484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.546635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.546660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.546779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.546804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.546949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.546975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 00:33:14.358 [2024-07-26 09:06:32.547127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.358 [2024-07-26 09:06:32.547170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.358 qpair failed and we were unable to recover it. 
00:33:14.358 [2024-07-26 09:06:32.547304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.358 [2024-07-26 09:06:32.547333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.358 qpair failed and we were unable to recover it.
00:33:14.360 [2024-07-26 09:06:32.568069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.360 [2024-07-26 09:06:32.568095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.360 qpair failed and we were unable to recover it. 00:33:14.360 [2024-07-26 09:06:32.568242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.360 [2024-07-26 09:06:32.568267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.360 qpair failed and we were unable to recover it. 00:33:14.360 [2024-07-26 09:06:32.568427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.360 [2024-07-26 09:06:32.568455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.360 qpair failed and we were unable to recover it. 00:33:14.360 [2024-07-26 09:06:32.568614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.568640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.568802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.568830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.569016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.569044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.569211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.569237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.569352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.569394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.569549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.569576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.569725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.569750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.569889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.569930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.570081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.570108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.570231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.570257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.570404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.570447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.570605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.570633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.570779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.570805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.570994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.571022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.571192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.571221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.571368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.571394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.571566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.571591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.571721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.571749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.571947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.571972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.572093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.572119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.572266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.572291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.572438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.572464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.572622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.572650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.572809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.572838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.573024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.573053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.573203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.573229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.573377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.573402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.573588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.573614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.573780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.573809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.574003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.574031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.574203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.574230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.574379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.574429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.574619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.574647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.574818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.574843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.574983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.575009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.575196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.575225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.575367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.575392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.575563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.575588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.575754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.575782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.575946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.575972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.576123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.576150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.576272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.576297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.576446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.576471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.576584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.576626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.576796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.576823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.576972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.576997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.577149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.577176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.577319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.577360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.577527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.577553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.577667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.577692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.577862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.577891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.578089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.578132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.578257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.578283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.578428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.578454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.578624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.578650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.578831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.578859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.579064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.579090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 
00:33:14.361 [2024-07-26 09:06:32.579271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.579297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.579461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.361 [2024-07-26 09:06:32.579489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.361 qpair failed and we were unable to recover it. 00:33:14.361 [2024-07-26 09:06:32.579661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.579688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.579834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.579859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.580051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.580085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 
00:33:14.362 [2024-07-26 09:06:32.580242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.580270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.580457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.580482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.580641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.580670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.580802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.580830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.580964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.580989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 
00:33:14.362 [2024-07-26 09:06:32.581114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.581140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.581261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.581287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.581409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.581435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.581620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.581649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.581805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.581838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 
00:33:14.362 [2024-07-26 09:06:32.581985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.582010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.582155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.582181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.582293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.582318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.582502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.582527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 00:33:14.362 [2024-07-26 09:06:32.582673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.362 [2024-07-26 09:06:32.582698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.362 qpair failed and we were unable to recover it. 
00:33:14.364 [2024-07-26 09:06:32.602616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.602646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.602834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.602862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.603065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.603091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.603253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.603281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.603463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.603496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 
00:33:14.364 [2024-07-26 09:06:32.603639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.603664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.603814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.603839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.604039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.604076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.604241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.604266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.604413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.604438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 
00:33:14.364 [2024-07-26 09:06:32.604555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.604581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.604750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.604775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.604918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.604944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.605136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.605165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.605358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.605383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 
00:33:14.364 [2024-07-26 09:06:32.605498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.605524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.605668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.605694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.605908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.605933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.606065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.606091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.606236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.606262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 
00:33:14.364 [2024-07-26 09:06:32.606411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.606436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.606559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.606584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.606706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.606731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.606877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.606903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.607065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.607095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 
00:33:14.364 [2024-07-26 09:06:32.607254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.607282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.607443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.607469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.607592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.607618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.607736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.607761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.607906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.607933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 
00:33:14.364 [2024-07-26 09:06:32.608079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.608106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.608223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.608249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.608376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.608403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.608556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.608582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.608703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.608729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 
00:33:14.364 [2024-07-26 09:06:32.608936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.608964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.609163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.609189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.609328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.609372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.609513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.609538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.609709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.609751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 
00:33:14.364 [2024-07-26 09:06:32.609912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.609941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.610084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.610111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.610235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.610261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.610459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.610488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 00:33:14.364 [2024-07-26 09:06:32.610626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.364 [2024-07-26 09:06:32.610658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.364 qpair failed and we were unable to recover it. 
00:33:14.365 [2024-07-26 09:06:32.610800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.610826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.611025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.611053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.611191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.611216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.611337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.611364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.611538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.611563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 
00:33:14.365 [2024-07-26 09:06:32.611708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.611734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.611855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.611880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.612018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.612046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.612186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.612212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.612360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.612401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 
00:33:14.365 [2024-07-26 09:06:32.612528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.612556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.612690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.612716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.612910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.612938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.613112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.613139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.613282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.613308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 
00:33:14.365 [2024-07-26 09:06:32.613469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.613498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.613685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.613714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.613883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.613909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.614054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.614086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.614201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.614227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 
00:33:14.365 [2024-07-26 09:06:32.614376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.614402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.614551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.614577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.614723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.614749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.614883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.614911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.615112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.615137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 
00:33:14.365 [2024-07-26 09:06:32.615281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.615307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.615469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.615508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.615680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.615724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.615922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.615966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 00:33:14.365 [2024-07-26 09:06:32.616115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.365 [2024-07-26 09:06:32.616143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.365 qpair failed and we were unable to recover it. 
00:33:14.365 [2024-07-26 09:06:32.616294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.616321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.616490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.616534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.616793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.616843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.617004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.617033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.617231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.617257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.617511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.617572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.617758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.617786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.617944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.617973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.618138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.618164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.618299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.618344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.618500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.618528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.618691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.618720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.618844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.618874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.619010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.619036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.619208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.619248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.619425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.619452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.619591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.619639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.619802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.619845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.619992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.620019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.620149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.620175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.620312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.620356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.620541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.620583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.620731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.620774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.620901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.620927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.621071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.621098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.621296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.621339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.621497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.621540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.621714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.621739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.621860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.621886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.622032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.622067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.622242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.622270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.622479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.365 [2024-07-26 09:06:32.622508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.365 qpair failed and we were unable to recover it.
00:33:14.365 [2024-07-26 09:06:32.622695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.622742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.622864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.622891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.623069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.623095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.623289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.623332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.623493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.623535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.623711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.623740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.623875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.623904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.624076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.624102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.624236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.624265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.624421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.624450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.624712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.624765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.624952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.624980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.625169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.625207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.625379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.625408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.625592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.625620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.625778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.625808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.625993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.626019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.626174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.626200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.626323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.626364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.626634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.626686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.626854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.626882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.627071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.627128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.627284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.627312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.627483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.627528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.627704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.627746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.627893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.627919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.628070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.628096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.628292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.628338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.628534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.628562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.628726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.628768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.628886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.628912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.629071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.629098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.629297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.629341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.629542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.629570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.629817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.629868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.629986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.630011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.630175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.630204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.630402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.630429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.630563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.630605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.630799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.630842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.630974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.631007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.631162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.631189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.631341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.631383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.631541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.631569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.631701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.631734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.631900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.631928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.632093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.632119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.632283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.632312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.632468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.632496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.632685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.632738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.632871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.632899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.633024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.633051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.633228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.633253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.633416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.633444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.633575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.633603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.633838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.633863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.634010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.366 [2024-07-26 09:06:32.634035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.366 qpair failed and we were unable to recover it.
00:33:14.366 [2024-07-26 09:06:32.634227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.634265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.634446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.634478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.634666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.634695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.634856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.634885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.635092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.635130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.635259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.635286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.635521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.635573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.635747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.635794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.635918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.635944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.636132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.636176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.636321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.636347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.636494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.636537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.636703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.636745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.636915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.636941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.637081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.637112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.637289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.637319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.637452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.637480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.637643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.637671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.637844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.637869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.638034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.638070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.638217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.638245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.638381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.638409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.638566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.638594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.638779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.638808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.638971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.638998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.639168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.367 [2024-07-26 09:06:32.639214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.367 qpair failed and we were unable to recover it.
00:33:14.367 [2024-07-26 09:06:32.639381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.639425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.639587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.639643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.639897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.639948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.640108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.640137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.640350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.640379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 
00:33:14.367 [2024-07-26 09:06:32.640557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.640599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.640775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.640823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.640973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.640999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.641147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.641173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.641318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.641361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 
00:33:14.367 [2024-07-26 09:06:32.641547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.641614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.641887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.641938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.642093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.642136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.642288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.642313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.642460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.642485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 
00:33:14.367 [2024-07-26 09:06:32.642614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.642647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.642811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.642840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.643016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.643041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.643178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.643204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.643328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.643353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 
00:33:14.367 [2024-07-26 09:06:32.643529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.643555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.643722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.643751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.643916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.643944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.644113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.644140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.644254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.644279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 
00:33:14.367 [2024-07-26 09:06:32.644424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.644449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.644643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.644694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.644852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.644880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.645030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.645056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.645222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.645247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 
00:33:14.367 [2024-07-26 09:06:32.645371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.645397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.645599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.645627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.645784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.645813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.645955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.645980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.367 [2024-07-26 09:06:32.646122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.646150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 
00:33:14.367 [2024-07-26 09:06:32.646268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.367 [2024-07-26 09:06:32.646294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.367 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.646453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.646496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.646721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.646749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.646920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.646949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.647117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.647143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 
00:33:14.368 [2024-07-26 09:06:32.647259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.647300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.647459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.647488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.647654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.647682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.647877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.647905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.648030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.648066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 
00:33:14.368 [2024-07-26 09:06:32.648227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.648253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.648424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.648452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.648620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.648647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.648799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.648828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.648982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.649021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 
00:33:14.368 [2024-07-26 09:06:32.649197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.649225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.649391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.649435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.649575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.649619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.649787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.649830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.649980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.650006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 
00:33:14.368 [2024-07-26 09:06:32.650123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.650170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.650334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.650363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.650488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.650516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.650655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.650697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.650883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.650911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 
00:33:14.368 [2024-07-26 09:06:32.651083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.651109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.651280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.651308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.651494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.651522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.651667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.651692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.651866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.651893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 
00:33:14.368 [2024-07-26 09:06:32.652049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.652085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.652215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.652240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.652426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.652454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.652614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.652643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.652802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.652830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 
00:33:14.368 [2024-07-26 09:06:32.652975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.653003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.653157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.653201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.653336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.653379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.653551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.653612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.653786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.653830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 
00:33:14.368 [2024-07-26 09:06:32.653973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.653999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.654144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.654175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.654352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.654381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.654521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.654549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.654712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.654740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 
00:33:14.368 [2024-07-26 09:06:32.654896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.654924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.655074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.655117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.655259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.655284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.655428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.655456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 00:33:14.368 [2024-07-26 09:06:32.655575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.368 [2024-07-26 09:06:32.655604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.368 qpair failed and we were unable to recover it. 
00:33:14.368 [2024-07-26 09:06:32.655791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.655819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.655968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.655993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.656165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.656191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.656390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.656436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.656602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.656645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.656832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.656875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.656992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.657018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.657194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.657241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.657437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.657466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.657725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.657768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.657885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.657910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.368 [2024-07-26 09:06:32.658031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.368 [2024-07-26 09:06:32.658057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.368 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.658244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.658287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.658469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.658495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.658608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.658634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.658807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.658833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.658981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.659007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.659136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.659165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.659380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.659422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.659628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.659671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.659822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.659848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.659970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.659996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.660139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.660183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.660379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.660408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.660594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.660625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.660780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.660806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.660949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.660975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.661142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.661171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.661330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.661357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.661519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.661547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.661801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.661853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.662013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.662041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.662219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.662262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.662439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.662468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.662626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.662654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.662780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.662808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.662969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.662994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.663136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.663162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.663286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.663312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.663490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.663518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.663679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.663709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.663894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.663922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.664089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.664115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.664264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.664289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.664456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.664499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.664688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.664715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.664875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.664903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.665031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.665066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.665253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.665278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.665435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.665462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.665619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.665647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.665867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.665894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.666055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.666119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.666248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.666273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.666450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.666479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.666639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.666667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.666790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.666818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.666996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.667036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.667202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.667230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.667383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.667410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.667583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.667609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.667774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.667818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.667944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.667970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.668148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.668178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.668362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.668390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.668523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.668552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.668720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.369 [2024-07-26 09:06:32.668748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.369 qpair failed and we were unable to recover it.
00:33:14.369 [2024-07-26 09:06:32.668933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.668961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.669131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.669157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.669289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.669317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.669469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.669497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.669623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.669651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.669796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.669821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.669963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.669988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.670154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.670181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.670327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.670355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.670510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.670539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.670699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.670727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.670866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.670892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.671070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.671100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.671216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.671242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.671405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.671433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.671591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.671619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.671744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.671772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.672014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.672042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.672213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.672239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.672404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.672433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.672665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.672716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.672904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.672932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.673129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.673155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.673280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.673306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.673422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.673447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.673607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.673635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.673782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.673829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.674011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.370 [2024-07-26 09:06:32.674039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.370 qpair failed and we were unable to recover it.
00:33:14.370 [2024-07-26 09:06:32.674191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.674217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.674338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.674365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.674508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.674551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.674705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.674733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.674918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.674946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 
00:33:14.370 [2024-07-26 09:06:32.675120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.675146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.675266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.675292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.675443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.675469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.675634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.675662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.675844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.675872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 
00:33:14.370 [2024-07-26 09:06:32.676034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.676066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.676180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.676210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.676373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.676401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.676594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.676622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.676782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.676810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 
00:33:14.370 [2024-07-26 09:06:32.676962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.676990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.677170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.677196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.677349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.677377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.677541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.677584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.677922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.677973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 
00:33:14.370 [2024-07-26 09:06:32.678146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.678173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.678293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.678319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.678490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.678515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.678691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.678720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.678879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.678908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 
00:33:14.370 [2024-07-26 09:06:32.679094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.679184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.679333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.679359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.679548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.679576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.679754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.679782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.679956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.679981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 
00:33:14.370 [2024-07-26 09:06:32.680113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.680139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.680287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.680313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.680447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.680475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.680613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.680641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.680790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.680819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 
00:33:14.370 [2024-07-26 09:06:32.680979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.681007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.370 [2024-07-26 09:06:32.681152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.370 [2024-07-26 09:06:32.681179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.370 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.681333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.681359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.681548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.681576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.681737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.681765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.681950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.681979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.682153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.682180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.682370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.682400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.682586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.682636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.682819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.682847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.682978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.683008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.683193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.683219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.683370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.683395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.683592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.683620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.683813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.683839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.684031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.684068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.684229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.684258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.684419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.684448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.684600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.684625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.684764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.684805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.684995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.685020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.685133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.685159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.685302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.685346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.685514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.685540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.685659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.685685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.685835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.685860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.686074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.686104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.686238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.686266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.686424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.686452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.686616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.686641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.686800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.686828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.686995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.687023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.687211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.687237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.687403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.687431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.687576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.687602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.687748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.687774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.687919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.687944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.688129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.688156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.688281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.688306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.688457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.688482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.688625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.688651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.688795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.688820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.689012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.689040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.689208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.689237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.689399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.689425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.689552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.689578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.689697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.689722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.689860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.689885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.690027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.690078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.690253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.690282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.690418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.690443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.690592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.690617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.690804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.690832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.690972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.690998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.691171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.691212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.691372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.691401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.691537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.691562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.691678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.691704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.691851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.691876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.692022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.692047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 
00:33:14.371 [2024-07-26 09:06:32.692191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.692220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.692406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.692434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.692596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.692621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.371 [2024-07-26 09:06:32.692734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.371 [2024-07-26 09:06:32.692760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.371 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.692888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.692915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.693083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.693109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.693218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.693260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.693395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.693423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.693591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.693617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.693789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.693815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.694016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.694041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.694224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.694251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.694418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.694446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.694579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.694608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.694773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.694798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.694965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.694993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.695135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.695161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.695310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.695336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.695480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.695520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.695675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.695703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.695853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.695878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.696017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.696042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.696250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.696279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.696451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.696477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.696599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.696625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.696768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.696797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.696949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.696975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.697082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.697127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.697313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.697342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.697500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.697525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.697673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.697717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.697898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.697926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.698124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.698150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.698345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.698373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.698558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.698587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.698720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.698745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.698890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.698932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.699086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.699116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.699286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.699313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.699507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.699536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.699718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.699747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.699915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.699941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.700136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.700165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.700318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.700346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.700509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.700534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.700728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.700756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.700926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.700951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.701074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.701100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.701242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.701267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.701448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.701475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.701596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.701622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.701774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.701817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.701944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.701972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.702147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.702174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.702325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.702368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 
00:33:14.372 [2024-07-26 09:06:32.702534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.702559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.702706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.702732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.702843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.702869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.703024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.372 [2024-07-26 09:06:32.703049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.372 qpair failed and we were unable to recover it. 00:33:14.372 [2024-07-26 09:06:32.703165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.703191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 
00:33:14.373 [2024-07-26 09:06:32.703306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.703331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.703505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.703533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.703726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.703751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.703907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.703935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.704101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.704127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 
00:33:14.373 [2024-07-26 09:06:32.704281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.704307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.704504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.704539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.704703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.704731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.704872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.704897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.705019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.705044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 
00:33:14.373 [2024-07-26 09:06:32.705231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.705260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.705428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.705453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.705567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.705592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.705730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.705758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.705923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.705948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 
00:33:14.373 [2024-07-26 09:06:32.706078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.706124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.706281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.706309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.706449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.706475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.706595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.706620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.706811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.706839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 
00:33:14.373 [2024-07-26 09:06:32.707039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.707087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.707252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.707281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.707418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.707446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.707580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.707606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.707747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.707772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 
00:33:14.373 [2024-07-26 09:06:32.707916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.707945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.708108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.708135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.708277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.708303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.708477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.708505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.708680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.708705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 
00:33:14.373 [2024-07-26 09:06:32.708894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.708922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.709126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.709152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.709299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.709325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.709473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.709502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 00:33:14.373 [2024-07-26 09:06:32.709684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.373 [2024-07-26 09:06:32.709710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.373 qpair failed and we were unable to recover it. 
00:33:14.373 [2024-07-26 09:06:32.709851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.709877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.710024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.710050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.710204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.710230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.710356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.710381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.710522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.710547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.710762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.710788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.710899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.710924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.711112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.711141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.711295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.711323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.711524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.711549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.711668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.711712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.711853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.711881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.712051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.712089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.712250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.712278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.712442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.712471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.712642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.712668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.712812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.712853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.713016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.713044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.713201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.713227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.713375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.713401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.713585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.713613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.713786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.713812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.713983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.714024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.714177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.714204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.714350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.714376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.714532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.714561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.714723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.714751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.373 [2024-07-26 09:06:32.714936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.373 [2024-07-26 09:06:32.714964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.373 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.715147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.715174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.715297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.715323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.715476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.715501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.715690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.715718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.715880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.715905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.716026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.716052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.716187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.716228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.716375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.716400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.716526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.716551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.716665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.716692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.716889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.716917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.717091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.717121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.717260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.717289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.717452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.717480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.717643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.717668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.717862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.717890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.718082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.718112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.718280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.718305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.718426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.718467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.718591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.718619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.718763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.718788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.718934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.718977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.719130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.719157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.719327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.719353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.719485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.719509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.719685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.719728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.719872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.719897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.720041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.720088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.720255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.720283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.720428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.720454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.720624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.720649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.720867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.720893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.721051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.721085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.721250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.721275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.721466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.721494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.721665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.721690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.721849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.721877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.722032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.722070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.722211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.722241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.722379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.722422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.722580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.722608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.722796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.722821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.722985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.723013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.723165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.723195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.723329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.723354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.723499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.723524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.723668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.723696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.723890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.723915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.724051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.724085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.724206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.724231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.724412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.724437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.724599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.724627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.724759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.724787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.724933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.724958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.725077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.725102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.725248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.725273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.725488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.725514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.725673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.725702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.725825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.725853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.726024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.726049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.726215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.726241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.726364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.726389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.726544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.726570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.726719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.726761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.726919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.726947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.374 [2024-07-26 09:06:32.727122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.374 [2024-07-26 09:06:32.727148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.374 qpair failed and we were unable to recover it.
00:33:14.375 [2024-07-26 09:06:32.727311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.375 [2024-07-26 09:06:32.727340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.375 qpair failed and we were unable to recover it.
00:33:14.375 [2024-07-26 09:06:32.727464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.727492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.727663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.727688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.727859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.727899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.728095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.728124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.728262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.728287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 
00:33:14.375 [2024-07-26 09:06:32.728438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.728463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.728616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.728641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.728757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.728782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.728925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.728950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.729106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.729132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 
00:33:14.375 [2024-07-26 09:06:32.729275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.729300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.729422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.729447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.729623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.729653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.729802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.729828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.729989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.730017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 
00:33:14.375 [2024-07-26 09:06:32.730202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.730229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.730369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.730394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.730509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.730550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.730710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.730739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.730872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.730897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 
00:33:14.375 [2024-07-26 09:06:32.731103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.731132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.731317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.731345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.731521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.731546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.731695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.731721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.731868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.731912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 
00:33:14.375 [2024-07-26 09:06:32.732085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.732112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.732231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.732256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.732414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.732439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.732585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.732610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.732734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.732759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 
00:33:14.375 [2024-07-26 09:06:32.732931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.732956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.733141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.733167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.733320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.733348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.733512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.733540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.733705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.733730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 
00:33:14.375 [2024-07-26 09:06:32.733847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.733872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.734043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.734078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.734198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.734223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.734368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.734409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.734596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.734628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 
00:33:14.375 [2024-07-26 09:06:32.734788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.734813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.734976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.735003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.735142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.735168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.735311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.735337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.735497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.735525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 
00:33:14.375 [2024-07-26 09:06:32.735699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.735724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.735904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.735929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.736064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.736093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.736221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.736249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.736403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.736428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 
00:33:14.375 [2024-07-26 09:06:32.736575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.736618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.736743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.736772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.375 [2024-07-26 09:06:32.736921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.375 [2024-07-26 09:06:32.736947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.375 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.737097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.737123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.737273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.737302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.737448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.737473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.737640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.737682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.737806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.737834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.738012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.738038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.738202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.738228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.738384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.738427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.738568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.738593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.738741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.738783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.738909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.738937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.739106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.739133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.739274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.739299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.739454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.739481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.739623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.739650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.739770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.739795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.739947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.739972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.740150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.740176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.740287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.740328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.740482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.740510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.740706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.740731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.740865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.740892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.741050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.741087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.741229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.741254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.741367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.741393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.741556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.741585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.741747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.741772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.741929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.741964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.742137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.742164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.742291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.742316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.742432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.742457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.742575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.742601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.742738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.742763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.742886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.742929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.743053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.743088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.743249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.743275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.743440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.743468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.743661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.743686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.743832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.743857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.744014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.744042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.744202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.744227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.744351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.744376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.744493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.744518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.744707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.744735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.744901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.744926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.745070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.745113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.745246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.745274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.745427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.745454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.745576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.745601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.745750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.745776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.745899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.745925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.746077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.746119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.746275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.746303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.746466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.746491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.746661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.746689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.746846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.746875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.747045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.747084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.747225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.747254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.747419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.747447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.747613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.747639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.747801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.747828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.747965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.747993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 
00:33:14.376 [2024-07-26 09:06:32.748190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.748217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.376 [2024-07-26 09:06:32.748375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.376 [2024-07-26 09:06:32.748403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.376 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.748598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.748627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.748813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.748856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.748985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.749013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.749181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.749207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.749327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.749352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.749493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.749519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.749696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.749726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.749868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.749895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.750057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.750098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.750259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.750298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.750441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.750466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.750607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.750632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.750828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.750857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.750989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.751015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.751201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.751227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.751371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.751411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.751586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.751611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.751799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.751827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.751999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.752024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.752209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.752236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.752405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.752433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.752587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.752615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.752773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.752799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.752938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.752964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.753134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.753163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.753325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.753350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.753461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.753486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.753630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.753658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.753823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.753851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.754037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.754073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.754213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.754239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.754353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.754384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.754528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.754569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.754723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.754752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.754912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.754937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.755129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.755157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.755291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.755320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.755459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.755484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.755629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.755671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.755832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.755861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.756003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.756029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.756185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.756211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.756373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.756401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.756542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.756568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.756740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.756766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.756940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.756968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.757104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.757131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.757281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.757322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.757535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.757578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.757759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.757787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.757938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.757965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.758137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.758168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.758308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.758334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.758478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.758522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.758766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.758793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.758918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.758943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.759149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.759179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 00:33:14.377 [2024-07-26 09:06:32.759335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.377 [2024-07-26 09:06:32.759363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.377 qpair failed and we were unable to recover it. 
00:33:14.377 [2024-07-26 09:06:32.759534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.759563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.759707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.759750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.759883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.759911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.760080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.760106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.760223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.760264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.760467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.760521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.760719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.760745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.760915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.760943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.761115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.761144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.761277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.761302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.761481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.761522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.761773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.761826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.762043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.762093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.762233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.762259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.762408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.762447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.762637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.762665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.762819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.762846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.762990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.763017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.763146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.763173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.763320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.763347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.763522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.763551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.763721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.763747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.763862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.763888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.764028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.764081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.764251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.764277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.764418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.764460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.764618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.764671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.764844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.764875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.764992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.765017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.765219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.765250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.765411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.765438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.765561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.765587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.765734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.765760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.765977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.766002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.766147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.766178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.766322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.766353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.766504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.766530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.766676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.766701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.766832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.766860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.767018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.767046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.767255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.767280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.767452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.767481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.767640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.767666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.767810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.767854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.768009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.768037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.768184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.768210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.768380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.768405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.768524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.768550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.768708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.768733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.768873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.768916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.769111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.769140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.769310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.769335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.769447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.769488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.769646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.769674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 00:33:14.378 [2024-07-26 09:06:32.769871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.378 [2024-07-26 09:06:32.769896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.378 qpair failed and we were unable to recover it. 
00:33:14.378 [2024-07-26 09:06:32.770077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.770110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.770236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.770264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.770428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.770454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.770584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.770611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.770762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.770788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 [2024-07-26 09:06:32.770899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.770925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.771082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.771127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.771278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.771307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.771472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.771498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.771657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.771685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 [2024-07-26 09:06:32.771874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.771900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.772045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.772082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.772196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.772221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.772340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.772379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.772554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.772579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 [2024-07-26 09:06:32.772709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.772737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.772890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.772918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.773086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.773111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.773223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.773248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.773437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.773481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 [2024-07-26 09:06:32.773657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.773685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.773829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.773872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.774033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.774068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.774231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.774257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.774427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.774456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 [2024-07-26 09:06:32.774613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.774642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.774787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.774814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.774968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.774994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.775181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.775207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.775352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.775378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 [2024-07-26 09:06:32.775539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.775567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.775713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.775739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.775887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.775913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.776068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.776094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.776297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.776323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 [2024-07-26 09:06:32.776466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.776492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.776633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.776676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.776815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.776844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.777007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.777032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.777166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.777192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 [2024-07-26 09:06:32.777338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.777369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.777517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.777544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.777663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.777690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.777862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.777905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.778064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.778109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 [2024-07-26 09:06:32.778286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.778312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.778519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.778547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.778719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.778746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.778937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.778966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.779096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.779126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 [2024-07-26 09:06:32.779300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.779326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.779522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.779565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1119946 Killed "${NVMF_APP[@]}" "$@" 00:33:14.379 [2024-07-26 09:06:32.779825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.779863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 00:33:14.379 [2024-07-26 09:06:32.780078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.379 [2024-07-26 09:06:32.780113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.379 qpair failed and we were unable to recover it. 
00:33:14.379 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:33:14.379 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:14.379 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:14.379 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:14.379 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:14.380 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1120497
00:33:14.380 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:14.380 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1120497
00:33:14.380 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1120497 ']'
00:33:14.380 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:14.380 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:14.380 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:14.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:14.380 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:14.380 09:06:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:14.669 [2024-07-26 09:06:32.799635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.799679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.799850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.799879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.800042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.800075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.800251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.800281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.800458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.800487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 
00:33:14.669 [2024-07-26 09:06:32.800622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.800648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.800791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.800833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.800961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.800989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.801157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.801183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.801299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.801340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 
00:33:14.669 [2024-07-26 09:06:32.801485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.801513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.801656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.801681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.801800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.801825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.801975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.669 [2024-07-26 09:06:32.802000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.669 qpair failed and we were unable to recover it. 00:33:14.669 [2024-07-26 09:06:32.802169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.802196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 
00:33:14.670 [2024-07-26 09:06:32.802334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.802362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.802514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.802557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.802731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.802757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.802905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.802936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.803121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.803149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 
00:33:14.670 [2024-07-26 09:06:32.803299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.803325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.803483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.803525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.803709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.803738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.803934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.803960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.804125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.804153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 
00:33:14.670 [2024-07-26 09:06:32.804309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.804335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.804479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.804504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.804624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.804654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.804823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.804849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.805001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.805027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 
00:33:14.670 [2024-07-26 09:06:32.805180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.805207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.805326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.805351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.805473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.805500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.805624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.805649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.805831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.805856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 
00:33:14.670 [2024-07-26 09:06:32.805994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.806020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.806164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.806192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.806360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.806385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.806530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.806556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.806699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.806725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 
00:33:14.670 [2024-07-26 09:06:32.806900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.806926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.807047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.807081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.807235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.807261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.807375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.807400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.807523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.807548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 
00:33:14.670 [2024-07-26 09:06:32.807671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.807697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.807814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.807839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.807955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.807981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.808109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.808135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.670 [2024-07-26 09:06:32.808282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.808307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 
00:33:14.670 [2024-07-26 09:06:32.808428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.670 [2024-07-26 09:06:32.808455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.670 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.808600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.808626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.808751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.808777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.808899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.808925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.809105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.809145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 
00:33:14.671 [2024-07-26 09:06:32.809294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.809321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.809474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.809501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.809650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.809676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.809827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.809853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.809970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.809996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 
00:33:14.671 [2024-07-26 09:06:32.810122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.810148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.810278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.810304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.810428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.810454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.810628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.810655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.810801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.810827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 
00:33:14.671 [2024-07-26 09:06:32.810974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.811000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.811151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.811177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.811323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.811358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.811487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.811513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.811688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.811714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 
00:33:14.671 [2024-07-26 09:06:32.811844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.811871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.812002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.812034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.812161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.812190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.812318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.812343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.812453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.812478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 
00:33:14.671 [2024-07-26 09:06:32.812599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.812625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.812792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.812817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.812971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.812997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.813153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.813180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.813351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.813376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 
00:33:14.671 [2024-07-26 09:06:32.813496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.813521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.813674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.813700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.813846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.813871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.813984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.814009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.814158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.814183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 
00:33:14.671 [2024-07-26 09:06:32.814304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.814330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.814450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.814476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.671 [2024-07-26 09:06:32.814646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.671 [2024-07-26 09:06:32.814671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.671 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.814817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.814843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.814970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.814996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 
00:33:14.672 [2024-07-26 09:06:32.815146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.815172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.815295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.815322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.815444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.815469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.815613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.815638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.815822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.815861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 
00:33:14.672 [2024-07-26 09:06:32.815982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.816009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.816137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.816165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.816290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.816315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.816435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.816460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.816575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.816600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 
00:33:14.672 [2024-07-26 09:06:32.816748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.816773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.816919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.816944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.817089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.817123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.817239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.817264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.817380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.817406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 
00:33:14.672 [2024-07-26 09:06:32.817577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.817602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.817725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.817751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.817869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.817896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.818047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.818079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.818201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.818226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 
00:33:14.672 [2024-07-26 09:06:32.818368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.818393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.818504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.818529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.818883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.818911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.819042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.819074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.819242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.819269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 
00:33:14.672 [2024-07-26 09:06:32.819418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.819444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.819604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.819630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.672 [2024-07-26 09:06:32.819762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.672 [2024-07-26 09:06:32.819801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.672 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.819965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.820004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.820155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.820182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 
00:33:14.673 [2024-07-26 09:06:32.820333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.820358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.820502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.820533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.820660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.820687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.820808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.820834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.820976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.821003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 
00:33:14.673 [2024-07-26 09:06:32.821161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.821186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.821332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.821357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.821504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.821530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.821653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.821679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.821800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.821825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 
00:33:14.673 [2024-07-26 09:06:32.821975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.822001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.822322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.822356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.822528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.822554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.822676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.822701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.822854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.822879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 
00:33:14.673 [2024-07-26 09:06:32.823013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.823065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.823219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.823246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.823365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.823390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.823558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.823584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.823703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.823729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 
00:33:14.673 [2024-07-26 09:06:32.823876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.823901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.824018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.824045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.824214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.824253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.824379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.824406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.824537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.824563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 
00:33:14.673 [2024-07-26 09:06:32.824708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.824733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.824905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.824931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.825075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.825102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.825228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.825255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.825377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.825403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 
00:33:14.673 [2024-07-26 09:06:32.825521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.825547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.825696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.825721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.825871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.825897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.826026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.826074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 00:33:14.673 [2024-07-26 09:06:32.826237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.673 [2024-07-26 09:06:32.826264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.673 qpair failed and we were unable to recover it. 
00:33:14.673 [2024-07-26 09:06:32.826386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.826411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.826557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.826583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.826755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.826781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.826901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.826926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.827044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.827079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 
00:33:14.674 [2024-07-26 09:06:32.827209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.827247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.827444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.827489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.827619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.827648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.827823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.827850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.827998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.828025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 
00:33:14.674 [2024-07-26 09:06:32.828168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.828206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.828406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.828445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.828601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.828628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.828773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.828799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 00:33:14.674 [2024-07-26 09:06:32.828926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.674 [2024-07-26 09:06:32.828952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.674 qpair failed and we were unable to recover it. 
00:33:14.674 [2024-07-26 09:06:32.829069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.829095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.829136] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:33:14.674 [2024-07-26 09:06:32.829216] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:14.674 [2024-07-26 09:06:32.829243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.829269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.829420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.829445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.829589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.829621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.829740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.829767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.829913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.829939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.830120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.830159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.830283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.830310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.830443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.830469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.830616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.830642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.830767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.830807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.830924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.830953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.831110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.831138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.831287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.831314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.831473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.831512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.831643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.831670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.831820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.831846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.831975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.832002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.832180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.832210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.832322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.832349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.832499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.832525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.674 qpair failed and we were unable to recover it.
00:33:14.674 [2024-07-26 09:06:32.832669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.674 [2024-07-26 09:06:32.832695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.832844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.832873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.833038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.833086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.833232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.833258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.833412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.833437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.833583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.833609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.833754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.833781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.833906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.833933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.834075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.834117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.834247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.834280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.834405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.834432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.834588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.834616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.834735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.834762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.834935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.834961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.835102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.835130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.835262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.835288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.835445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.835472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.835592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.835618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.835745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.835772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.835896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.835924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.836080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.836108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.836229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.836255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.836379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.836405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.836590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.836617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.836743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.836768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.836885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.836911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.837051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.837082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.837200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.837226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.837373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.837399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.837510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.837536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.837655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.837680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.837827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.837852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.838017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.838056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.838190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.838217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.838340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.838367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.838482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.838508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.838627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.838658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.838808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.675 [2024-07-26 09:06:32.838836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.675 qpair failed and we were unable to recover it.
00:33:14.675 [2024-07-26 09:06:32.839010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.839038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.839183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.839221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.839372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.839400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.839542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.839568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.839680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.839706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.839833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.839860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.840011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.840048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.840180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.840206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.840355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.840382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.840529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.840556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.840741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.840767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.840881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.840907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.841073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.841100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.841216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.841243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.841362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.841388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.841512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.841538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.841709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.841735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.841851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.841879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.842022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.842049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.842233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.842260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.842395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.842422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.842571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.842597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.842743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.842769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.842892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.842918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.843035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.843067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.843201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.843228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.843388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.843418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.843574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.843601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.843724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.843750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.843874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.843900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.844043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.844083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.844201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.844225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.844345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.844377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.844499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.844523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.844640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.844666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.844814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.844840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.844950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.676 [2024-07-26 09:06:32.844974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.676 qpair failed and we were unable to recover it.
00:33:14.676 [2024-07-26 09:06:32.845135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.845174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.845332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.845365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.845484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.845510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.845654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.845680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.845824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.845850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.846010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.846049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.846235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.846261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.846426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.846454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.846605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.846631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.846768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.846794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.846917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.846943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.847068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.847095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.847240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.847266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.847395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.847421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.847571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.847597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.847747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.847773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.847935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.847974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.848132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.677 [2024-07-26 09:06:32.848172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420
00:33:14.677 qpair failed and we were unable to recover it.
00:33:14.677 [2024-07-26 09:06:32.848346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.848373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.848543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.848569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.848716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.848742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.848889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.848915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.849092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.849119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 
00:33:14.677 [2024-07-26 09:06:32.849239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.849265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.849397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.849423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.849547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.849589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.849755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.849793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.849943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.849970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 
00:33:14.677 [2024-07-26 09:06:32.850106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.850135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.850261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.850288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.850439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.677 [2024-07-26 09:06:32.850465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.677 qpair failed and we were unable to recover it. 00:33:14.677 [2024-07-26 09:06:32.850650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.850676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.850800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.850827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 
00:33:14.678 [2024-07-26 09:06:32.850949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.850977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.851118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.851145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.851269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.851295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.851451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.851477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.851632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.851658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 
00:33:14.678 [2024-07-26 09:06:32.851804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.851830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.852010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.852050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.852193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.852231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.852357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.852389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.852535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.852561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 
00:33:14.678 [2024-07-26 09:06:32.852673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.852699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.852819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.852845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.852957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.852983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.853151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.853178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.853302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.853329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 
00:33:14.678 [2024-07-26 09:06:32.853498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.853523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.853671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.853696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.853865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.853891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.854041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.854092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.854254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.854281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 
00:33:14.678 [2024-07-26 09:06:32.854438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.854464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.854613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.854640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.854781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.854819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.854977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.855005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.855154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.855192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 
00:33:14.678 [2024-07-26 09:06:32.855347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.855374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.855525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.855550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.855690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.855715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.855857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.855883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.856005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.856030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 
00:33:14.678 [2024-07-26 09:06:32.856203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.856230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.856372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.856400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.856552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.856579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.856728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.856754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.678 [2024-07-26 09:06:32.856867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.856893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 
00:33:14.678 [2024-07-26 09:06:32.857018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.678 [2024-07-26 09:06:32.857050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.678 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.857208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.857234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.857353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.857379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.857501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.857527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.857703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.857728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 
00:33:14.679 [2024-07-26 09:06:32.857851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.857877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.858028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.858054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.858191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.858229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.858356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.858383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.858526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.858552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 
00:33:14.679 [2024-07-26 09:06:32.858699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.858725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.858863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.858888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.859029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.859054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.859208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.859233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.859382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.859407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 
00:33:14.679 [2024-07-26 09:06:32.859551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.859577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.859721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.859746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.859890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.859915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.860068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.860094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.860237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.860261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 
00:33:14.679 [2024-07-26 09:06:32.860385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.860410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.860549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.860575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.860746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.860771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.860915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.860940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.861050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.861083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 
00:33:14.679 [2024-07-26 09:06:32.861224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.861249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.861422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.861448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.861589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.861621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.861781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.861806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.861955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.861980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 
00:33:14.679 [2024-07-26 09:06:32.862142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.862169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.862290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.862315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.862497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.862522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.862698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.862724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 00:33:14.679 [2024-07-26 09:06:32.862848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.679 [2024-07-26 09:06:32.862873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.679 qpair failed and we were unable to recover it. 
00:33:14.679 [2024-07-26 09:06:32.862989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.863014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.863149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.863175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.863322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.863347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.863468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.863494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.863615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.863640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.863761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.863787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.863939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.863964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.864122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.864147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.864284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.864310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.864492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.864518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 EAL: No free 2048 kB hugepages reported on node 1
00:33:14.680 [2024-07-26 09:06:32.864642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.864668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.864788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.864815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.864985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.865011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.865145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.865171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.865298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.865325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.865461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.865486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.865630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.865655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.865801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.865826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.865964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.865989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.866151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.866190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.866324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.866351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.866498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.866525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.866670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.866697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.866840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.866866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.867008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.867034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.867163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.867189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.867349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.867375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.867498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.867524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.867758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.867787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.867938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.867963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.868128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.868153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.868301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.868327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.868477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.868503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.868657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.868682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.868805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.680 [2024-07-26 09:06:32.868831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.680 qpair failed and we were unable to recover it.
00:33:14.680 [2024-07-26 09:06:32.868955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.868981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.869107] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:14.681 [2024-07-26 09:06:32.869126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.869162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.869295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.869324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.869443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.869469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.869638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.869664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.869782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.869809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.869985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.870011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.870175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.870202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.870321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.870347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.870498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.870523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.870678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.870705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.870853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.870882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.871019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.871045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.871194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.871219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.871375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.871400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.871516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.871541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.871653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.871679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.871821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.871846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.871968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.871993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.872120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.872147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.872271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.872297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.872448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.872474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.872642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.872667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.872794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.872825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.872974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.872999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.873140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.873179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.873336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.873363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.873507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.873532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.873653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.873680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.873796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.873822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.873936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.873961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.874109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.874136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.874280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.874306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.874425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.874450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.874601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.681 [2024-07-26 09:06:32.874627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.681 qpair failed and we were unable to recover it.
00:33:14.681 [2024-07-26 09:06:32.874755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.874794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.874917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.874945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.875104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.875132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.875246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.875272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.875406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.875432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.875577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.875602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.875716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.875742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.875911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.875937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.876072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.876113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.876240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.876267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.876389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.876415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.876558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.876584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.876701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.876726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.876867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.876893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.877006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.877032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.877158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.877190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.877309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.877335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.877475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.877501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.877613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.877638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.877760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.877785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.877916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.877942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.878079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.878105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.878244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.878270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.878422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.878447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.878593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.682 [2024-07-26 09:06:32.878619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.682 qpair failed and we were unable to recover it.
00:33:14.682 [2024-07-26 09:06:32.878734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.682 [2024-07-26 09:06:32.878759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.682 qpair failed and we were unable to recover it. 00:33:14.682 [2024-07-26 09:06:32.878899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.682 [2024-07-26 09:06:32.878924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.682 qpair failed and we were unable to recover it. 00:33:14.682 [2024-07-26 09:06:32.879105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.682 [2024-07-26 09:06:32.879131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.682 qpair failed and we were unable to recover it. 00:33:14.682 [2024-07-26 09:06:32.879278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.682 [2024-07-26 09:06:32.879304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.682 qpair failed and we were unable to recover it. 00:33:14.682 [2024-07-26 09:06:32.879467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.682 [2024-07-26 09:06:32.879492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.682 qpair failed and we were unable to recover it. 
00:33:14.683 [2024-07-26 09:06:32.879660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.879685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.879837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.879863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.879978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.880003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.880160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.880187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.880338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.880363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 
00:33:14.683 [2024-07-26 09:06:32.880503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.880529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.880679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.880704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.880834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.880860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.881008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.881033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.881186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.881212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 
00:33:14.683 [2024-07-26 09:06:32.881351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.881376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.881523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.881549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.881700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.881730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.881876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.881901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.882070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.882096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 
00:33:14.683 [2024-07-26 09:06:32.882236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.882261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.882422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.882447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.882568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.882595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.882769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.882794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.882917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.882944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 
00:33:14.683 [2024-07-26 09:06:32.883081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.883107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.883249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.883274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.883404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.883429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.883594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.883619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.883763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.883788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 
00:33:14.683 [2024-07-26 09:06:32.883931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.883957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.884093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.884132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.884310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.884337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.884463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.884490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.884640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.884666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 
00:33:14.683 [2024-07-26 09:06:32.884808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.884834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.884982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.885008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.885146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.885174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.885317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.885343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 00:33:14.683 [2024-07-26 09:06:32.885475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.885500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.683 qpair failed and we were unable to recover it. 
00:33:14.683 [2024-07-26 09:06:32.885667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.683 [2024-07-26 09:06:32.885693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.885836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.885861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.886002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.886027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.886182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.886208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.886356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.886381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 
00:33:14.684 [2024-07-26 09:06:32.886523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.886549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.886682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.886707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.886846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.886870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.887006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.887032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.887161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.887186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 
00:33:14.684 [2024-07-26 09:06:32.887329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.887353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.887490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.887515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.887632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.887656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.887774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.887799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.887948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.887973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 
00:33:14.684 [2024-07-26 09:06:32.888110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.888136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.888316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.888355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.888508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.888535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.888677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.888703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.888829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.888855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 
00:33:14.684 [2024-07-26 09:06:32.889021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.889047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.889179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.889204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.889352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.889377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.889501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.889527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.889663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.889689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 
00:33:14.684 [2024-07-26 09:06:32.889818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.889845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.889993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.890018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.890140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.890165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.890311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.890338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.890462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.890487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 
00:33:14.684 [2024-07-26 09:06:32.890631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.890655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.890770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.890795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.890974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.890999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.891169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.891208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 00:33:14.684 [2024-07-26 09:06:32.891359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.684 [2024-07-26 09:06:32.891386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.684 qpair failed and we were unable to recover it. 
00:33:14.684 [2024-07-26 09:06:32.891536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.685 [2024-07-26 09:06:32.891562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.685 qpair failed and we were unable to recover it. 00:33:14.685 [2024-07-26 09:06:32.891736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.685 [2024-07-26 09:06:32.891761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.685 qpair failed and we were unable to recover it. 00:33:14.685 [2024-07-26 09:06:32.891881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.685 [2024-07-26 09:06:32.891906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.685 qpair failed and we were unable to recover it. 00:33:14.685 [2024-07-26 09:06:32.892034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.685 [2024-07-26 09:06:32.892066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.685 qpair failed and we were unable to recover it. 00:33:14.685 [2024-07-26 09:06:32.892210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.685 [2024-07-26 09:06:32.892235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.685 qpair failed and we were unable to recover it. 
00:33:14.685 [2024-07-26 09:06:32.892381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.685 [2024-07-26 09:06:32.892407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.685 qpair failed and we were unable to recover it. 00:33:14.685 [2024-07-26 09:06:32.892530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.685 [2024-07-26 09:06:32.892556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.685 qpair failed and we were unable to recover it. 00:33:14.685 [2024-07-26 09:06:32.892733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.685 [2024-07-26 09:06:32.892759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.685 qpair failed and we were unable to recover it. 00:33:14.685 [2024-07-26 09:06:32.892882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.685 [2024-07-26 09:06:32.892907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.685 qpair failed and we were unable to recover it. 00:33:14.685 [2024-07-26 09:06:32.893055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.685 [2024-07-26 09:06:32.893088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.685 qpair failed and we were unable to recover it. 
00:33:14.685 [2024-07-26 09:06:32.893214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.685 [2024-07-26 09:06:32.893239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.685 qpair failed and we were unable to recover it.
00:33:14.686 [2024-07-26 09:06:32.898767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:14.686 [2024-07-26 09:06:32.898923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.686 [2024-07-26 09:06:32.898961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.686 qpair failed and we were unable to recover it.
00:33:14.687 [2024-07-26 09:06:32.904091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.687 [2024-07-26 09:06:32.904131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.687 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure entries for tqpair=0x7fcfb4000b90, 0x7fcfac000b90, and 0xb2c4b0 repeated; duplicates omitted]
00:33:14.688 [2024-07-26 09:06:32.913063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.913090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.913237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.913262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.913404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.913429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.913594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.913620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.913770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.913796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 
00:33:14.688 [2024-07-26 09:06:32.913926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.913950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.914072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.914097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.914244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.914271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.914397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.914421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.914593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.914619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 
00:33:14.688 [2024-07-26 09:06:32.914766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.914792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.914910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.914935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.915098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.688 [2024-07-26 09:06:32.915124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.688 qpair failed and we were unable to recover it. 00:33:14.688 [2024-07-26 09:06:32.915247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.915272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.915441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.915466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 
00:33:14.689 [2024-07-26 09:06:32.915607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.915633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.915803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.915829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.915951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.915976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.916125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.916155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.916326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.916351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 
00:33:14.689 [2024-07-26 09:06:32.916485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.916510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.916650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.916675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.916816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.916842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.916957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.916983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.917103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.917130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 
00:33:14.689 [2024-07-26 09:06:32.917280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.917305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.917456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.917483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.917610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.917637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.917752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.917777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.917949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.917974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 
00:33:14.689 [2024-07-26 09:06:32.918117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.918156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.918337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.918364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.918514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.918540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.918657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.918682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.918819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.918845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 
00:33:14.689 [2024-07-26 09:06:32.918983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.919023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.919179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.919205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.919327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.919352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.919464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.919490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.919626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.919651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 
00:33:14.689 [2024-07-26 09:06:32.919801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.919826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.919967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.919993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.920127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.920165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.920305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.920331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.920477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.920503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 
00:33:14.689 [2024-07-26 09:06:32.920660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.920686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.920797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.920821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.920941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.920966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.921120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.921145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 00:33:14.689 [2024-07-26 09:06:32.921288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.921313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.689 qpair failed and we were unable to recover it. 
00:33:14.689 [2024-07-26 09:06:32.921453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.689 [2024-07-26 09:06:32.921478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.921613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.921638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.921786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.921811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.921956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.921980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.922176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.922215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 
00:33:14.690 [2024-07-26 09:06:32.922396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.922424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.922544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.922570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.922694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.922719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.922865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.922891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.923021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.923047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 
00:33:14.690 [2024-07-26 09:06:32.923206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.923232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.923372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.923398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.923522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.923547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.923673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.923698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.923846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.923870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 
00:33:14.690 [2024-07-26 09:06:32.923987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.924012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.924166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.924193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.924321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.924346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.924480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.924506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.924657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.924684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 
00:33:14.690 [2024-07-26 09:06:32.924833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.924858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.924999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.925023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.925191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.925218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.925344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.925369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.925548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.925573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 
00:33:14.690 [2024-07-26 09:06:32.925697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.925723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.925891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.925917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.926043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.926074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.926188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.926214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.926355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.926380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 
00:33:14.690 [2024-07-26 09:06:32.926530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.926555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.926700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.926726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.926877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.926904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.927020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.927045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.927172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.927198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 
00:33:14.690 [2024-07-26 09:06:32.927316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.927340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.927466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.927491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.927638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.690 [2024-07-26 09:06:32.927663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.690 qpair failed and we were unable to recover it. 00:33:14.690 [2024-07-26 09:06:32.927798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.927822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.927962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.927987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 
00:33:14.691 [2024-07-26 09:06:32.928101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.928127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.928255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.928281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.928400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.928425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.928570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.928596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.928740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.928765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 
00:33:14.691 [2024-07-26 09:06:32.928890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.928916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.929029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.929055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.929203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.929229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.929348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.929373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.929514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.929543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 
00:33:14.691 [2024-07-26 09:06:32.929686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.929712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.929836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.929862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.930007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.930033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.930187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.930213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.930339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.930364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 
00:33:14.691 [2024-07-26 09:06:32.930479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.930504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.930625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.930650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.930791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.930816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.930989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.931014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.931139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.931165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 
00:33:14.691 [2024-07-26 09:06:32.931316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.931341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.931463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.931488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.931601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.931627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.931801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.931827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.931975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.932002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 
00:33:14.691 [2024-07-26 09:06:32.932177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.932204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.932367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.932406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.932561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.932588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.932712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.932739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 00:33:14.691 [2024-07-26 09:06:32.932888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.932914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.691 qpair failed and we were unable to recover it. 
00:33:14.691 [2024-07-26 09:06:32.933077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.691 [2024-07-26 09:06:32.933103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.933251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.933277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.933393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.933419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.933542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.933568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.933680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.933706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 
00:33:14.692 [2024-07-26 09:06:32.933835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.933862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.934037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.934073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.934203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.934229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.934354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.934380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.934504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.934530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 
00:33:14.692 [2024-07-26 09:06:32.934701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.934726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.934844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.934870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.935048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.935079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.935219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.935244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.935378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.935405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 
00:33:14.692 [2024-07-26 09:06:32.935553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.935580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.935719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.935745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.935862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.935887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.936028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.936055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.936185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.936211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 
00:33:14.692 [2024-07-26 09:06:32.936381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.936407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.936558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.936584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.936702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.936729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.936887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.936915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.937072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.937099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 
00:33:14.692 [2024-07-26 09:06:32.937230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.937256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.937404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.937429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.937599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.937625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.937772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.937797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.937917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.937942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 
00:33:14.692 [2024-07-26 09:06:32.938070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.938097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.938272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.938298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.938425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.938452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.938601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.938631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.938771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.938797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 
00:33:14.692 [2024-07-26 09:06:32.938909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.938935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.939078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.939105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.939255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.692 [2024-07-26 09:06:32.939280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.692 qpair failed and we were unable to recover it. 00:33:14.692 [2024-07-26 09:06:32.939392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.939417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.939567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.939592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 
00:33:14.693 [2024-07-26 09:06:32.939722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.939747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.939904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.939943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.940126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.940154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.940300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.940327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.940445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.940472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 
00:33:14.693 [2024-07-26 09:06:32.940620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.940646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.940771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.940796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.940950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.940976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.941094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.941121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.941259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.941285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 
00:33:14.693 [2024-07-26 09:06:32.941404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.941430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.941580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.941606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.941779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.941805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.941950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.941978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.942104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.942130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 
00:33:14.693 [2024-07-26 09:06:32.942275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.942301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.942426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.942452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.942574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.942599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.942769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.942794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 00:33:14.693 [2024-07-26 09:06:32.942916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.693 [2024-07-26 09:06:32.942943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.693 qpair failed and we were unable to recover it. 
00:33:14.693 [2024-07-26 09:06:32.943088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.943119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.943245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.943271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.943419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.943445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.943584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.943610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.943798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.943824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.943948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.943974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.944125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.944151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.944333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.944358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.944480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.944506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.944621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.944647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.944764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.944790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.944918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.944945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.945098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.945124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.945249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.945275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.945394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.693 [2024-07-26 09:06:32.945419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.693 qpair failed and we were unable to recover it.
00:33:14.693 [2024-07-26 09:06:32.945539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.945564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.945710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.945735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.945852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.945878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.946053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.946084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.946214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.946240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.946395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.946421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.946595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.946620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.946791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.946817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.946945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.946971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.947101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.947128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.947303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.947328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.947472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.947497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.947648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.947673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.947818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.947844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.947978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.948016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.948176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.948204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.948336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.948364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.948511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.948537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.948662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.948688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.948830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.948856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.949091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.949119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.949265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.949290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.949442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.949467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.949613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.949638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.949811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.949836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.949953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.949978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.950131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.950158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.950283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.950311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.950459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.950485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.950610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.950637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.950793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.950819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.950936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.950963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.951115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.951141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.951285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.951311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.951456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.951481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.951627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.951654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.951781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.694 [2024-07-26 09:06:32.951806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.694 qpair failed and we were unable to recover it.
00:33:14.694 [2024-07-26 09:06:32.951976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.952001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.952207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.952233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.952391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.952417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.952562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.952588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.952760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.952785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.952902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.952927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.953084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.953110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.953283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.953310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.953441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.953466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.953589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.953615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.953727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.953752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.953925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.953951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.954070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.954096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.954244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.954272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.954397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.954423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.954541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.954567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.954811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.954838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.954962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.954989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.955106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.955133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.955284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.955310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.955455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.955481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.955634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.955659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.955807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.955834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.955983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.956009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.956193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.956219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.956330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.956356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.956503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.956530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.956676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.956702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.956852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.956880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.957034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.957065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.957214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.957240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.957360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.957385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.957532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.957558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.957709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.957735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.957883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.957910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.958025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.958050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.958207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.958233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.958351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.695 [2024-07-26 09:06:32.958377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.695 qpair failed and we were unable to recover it.
00:33:14.695 [2024-07-26 09:06:32.958528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.696 [2024-07-26 09:06:32.958554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.696 qpair failed and we were unable to recover it.
00:33:14.696 [2024-07-26 09:06:32.958680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.696 [2024-07-26 09:06:32.958707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.696 qpair failed and we were unable to recover it.
00:33:14.696 [2024-07-26 09:06:32.958845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.696 [2024-07-26 09:06:32.958871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.696 qpair failed and we were unable to recover it.
00:33:14.696 [2024-07-26 09:06:32.959025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.696 [2024-07-26 09:06:32.959051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.696 qpair failed and we were unable to recover it.
00:33:14.696 [2024-07-26 09:06:32.959204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.696 [2024-07-26 09:06:32.959234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.696 qpair failed and we were unable to recover it.
00:33:14.696 [2024-07-26 09:06:32.959410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.959435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.959581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.959606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.959745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.959771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.959894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.959921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.960043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.960076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 
00:33:14.696 [2024-07-26 09:06:32.960241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.960267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.960416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.960442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.960592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.960619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.960762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.960788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.960935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.960962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 
00:33:14.696 [2024-07-26 09:06:32.961082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.961108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.961237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.961263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.961410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.961435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.961554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.961580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.961721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.961746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 
00:33:14.696 [2024-07-26 09:06:32.961868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.961894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.962008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.962034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.962160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.962186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.962352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.962377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.962500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.962525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 
00:33:14.696 [2024-07-26 09:06:32.962649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.962675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.962821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.962847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.963020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.963046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.963175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.963201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.963320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.963348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 
00:33:14.696 [2024-07-26 09:06:32.963469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.963495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.696 [2024-07-26 09:06:32.963644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.696 [2024-07-26 09:06:32.963675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.696 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.963815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.963841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.963967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.963992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.964128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.964154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 
00:33:14.697 [2024-07-26 09:06:32.964276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.964301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.964451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.964479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.964650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.964676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.964792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.964817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.964941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.964966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 
00:33:14.697 [2024-07-26 09:06:32.965097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.965123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.965252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.965279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.965428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.965454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.965602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.965628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.965768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.965794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 
00:33:14.697 [2024-07-26 09:06:32.965912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.965938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.966082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.966108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.966252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.966277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.966440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.966466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.966610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.966635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 
00:33:14.697 [2024-07-26 09:06:32.966781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.966807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.966957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.966982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.967093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.967120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.967244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.967271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.967414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.967440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 
00:33:14.697 [2024-07-26 09:06:32.967559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.967586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.967727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.967753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.967877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.967904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.968056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.968098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.968218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.968244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 
00:33:14.697 [2024-07-26 09:06:32.968414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.968440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.968589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.968615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.968765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.968792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.968933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.968958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.969078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.969103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 
00:33:14.697 [2024-07-26 09:06:32.969266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.969291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.969434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.969459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.969599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.969625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.969771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.969799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.697 qpair failed and we were unable to recover it. 00:33:14.697 [2024-07-26 09:06:32.969948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.697 [2024-07-26 09:06:32.969974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 
00:33:14.698 [2024-07-26 09:06:32.970119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.970145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.970293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.970319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.970468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.970495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.970645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.970671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.970785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.970811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 
00:33:14.698 [2024-07-26 09:06:32.970931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.970958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.971111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.971137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.971300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.971326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.971446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.971472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.971643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.971669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 
00:33:14.698 [2024-07-26 09:06:32.971788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.971815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.971997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.972022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.972160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.972187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.972306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.972331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.972454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.972479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 
00:33:14.698 [2024-07-26 09:06:32.972658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.972688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.972807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.972834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.972983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.973009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.973152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.973178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.973303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.973329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 
00:33:14.698 [2024-07-26 09:06:32.973472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.973498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.973670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.973695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.973824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.973851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.973972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.973997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.974117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.974144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 
00:33:14.698 [2024-07-26 09:06:32.974287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.974313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.974436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.974461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.974619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.974644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.974782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.974807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 00:33:14.698 [2024-07-26 09:06:32.974923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.698 [2024-07-26 09:06:32.974948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.698 qpair failed and we were unable to recover it. 
00:33:14.698 [2024-07-26 09:06:32.975055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.698 [2024-07-26 09:06:32.975096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.698 qpair failed and we were unable to recover it.
00:33:14.698 [2024-07-26 09:06:32.975230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.698 [2024-07-26 09:06:32.975255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.698 qpair failed and we were unable to recover it.
00:33:14.698 [2024-07-26 09:06:32.975371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.698 [2024-07-26 09:06:32.975397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.698 qpair failed and we were unable to recover it.
00:33:14.698 [2024-07-26 09:06:32.975538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.698 [2024-07-26 09:06:32.975563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.698 qpair failed and we were unable to recover it.
00:33:14.698 [2024-07-26 09:06:32.975682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.698 [2024-07-26 09:06:32.975710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.698 qpair failed and we were unable to recover it.
00:33:14.698 [2024-07-26 09:06:32.975831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.698 [2024-07-26 09:06:32.975858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.698 qpair failed and we were unable to recover it.
00:33:14.698 [2024-07-26 09:06:32.976028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.698 [2024-07-26 09:06:32.976054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.698 qpair failed and we were unable to recover it.
00:33:14.698 [2024-07-26 09:06:32.976209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.976234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.976385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.976411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.976531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.976558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.976681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.976707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.976851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.976877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.976997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.977026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.977177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.977204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.977358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.977383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.977526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.977551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.977666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.977691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.977803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.977829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.977969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.977994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.978113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.978141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.978308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.978334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.978449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.978475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.978616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.978641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.978771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.978798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.978939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.978965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.979087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.979114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.979244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.979270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.979408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.979433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.979552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.979577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.979698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.979724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.979898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.979923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.980034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.980064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.980183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.980209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.980361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.980387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.980496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.980522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.980643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.980668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.980809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.980834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.980974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.981000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.981149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.981175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.981314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.981344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.981486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.981512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.981665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.981690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.981832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.981857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.981970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.981995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.699 [2024-07-26 09:06:32.982145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.699 [2024-07-26 09:06:32.982171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.699 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.982313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.982339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.982465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.982491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.982617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.982643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.982761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.982788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.982915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.982940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.983113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.983139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.983259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.983284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.983452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.983478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.983606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.983631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.983746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.983771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.983913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.983939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.984077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.984103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.984263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.984288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.984414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.984439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.984616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.984642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.984760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.984785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.984948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.984974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.985098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.985125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.985298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.985323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.985459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.985485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.985593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.985619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.985765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.985791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.985937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.985962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.986082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.986108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.986282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.986307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.986427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.986452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.986568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.986594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.986743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.986769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.986904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.986944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.987094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.987123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.987250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.987277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.987392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.987418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.987562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.987588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.987758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.987783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.987921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.987947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.988082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.988109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.988232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.988260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.700 qpair failed and we were unable to recover it.
00:33:14.700 [2024-07-26 09:06:32.988412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.700 [2024-07-26 09:06:32.988438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.988558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.988584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.988707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.988733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.988882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.988908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.989054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.989091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.989214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.989239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.989356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.989381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.989505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.989531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.989661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.989686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.989806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.989833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.989952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.989979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.990118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.990149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.990267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.701 [2024-07-26 09:06:32.990294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.701 qpair failed and we were unable to recover it.
00:33:14.701 [2024-07-26 09:06:32.990336] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:14.701 [2024-07-26 09:06:32.990370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:14.701 [2024-07-26 09:06:32.990385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.701 [2024-07-26 09:06:32.990397] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.701 [2024-07-26 09:06:32.990407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:14.701 [2024-07-26 09:06:32.990450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.990475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.990614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.990643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.990603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:14.701 [2024-07-26 09:06:32.990654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:14.701 [2024-07-26 09:06:32.990771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.990700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:14.701 [2024-07-26 09:06:32.990703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:14.701 [2024-07-26 09:06:32.990798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 
00:33:14.701 [2024-07-26 09:06:32.990927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.990955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.991080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.991106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.991230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.991256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.991375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.991401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.991544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.991569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 
00:33:14.701 [2024-07-26 09:06:32.991713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.991739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.991863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.991889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.992005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.992031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.992155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.992183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.992331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.992358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 
00:33:14.701 [2024-07-26 09:06:32.992490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.992516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.992693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.992719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.992836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.992862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.701 qpair failed and we were unable to recover it. 00:33:14.701 [2024-07-26 09:06:32.992985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.701 [2024-07-26 09:06:32.993012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.993146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.993174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 
00:33:14.702 [2024-07-26 09:06:32.993294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.993320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.993438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.993464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.993585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.993610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.993758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.993783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.993908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.993939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 
00:33:14.702 [2024-07-26 09:06:32.994050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.994082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.994227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.994254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.994459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.994485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.994611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.994637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.994754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.994779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 
00:33:14.702 [2024-07-26 09:06:32.994893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.994919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.995043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.995082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.995227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.995252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.995377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.995404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.995572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.995598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 
00:33:14.702 [2024-07-26 09:06:32.995777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.995803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.995931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.995958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.996091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.996119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.996301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.996326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.996453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.996479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 
00:33:14.702 [2024-07-26 09:06:32.996592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.996617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.996792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.996818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.996943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.996968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.997085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.997110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.997252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.997277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 
00:33:14.702 [2024-07-26 09:06:32.997429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.997455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.997598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.997625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.997742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.997769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.997885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.997912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.998038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.998069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 
00:33:14.702 [2024-07-26 09:06:32.998228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.998254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.998380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.998406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.998571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.998597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.998712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.998739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 00:33:14.702 [2024-07-26 09:06:32.998882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.998908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.702 qpair failed and we were unable to recover it. 
00:33:14.702 [2024-07-26 09:06:32.999067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.702 [2024-07-26 09:06:32.999095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:32.999245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:32.999272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:32.999397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:32.999423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:32.999568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:32.999594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:32.999715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:32.999741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 
00:33:14.703 [2024-07-26 09:06:32.999887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:32.999912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.000032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.000064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.000188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.000214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.000325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.000350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.000495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.000521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 
00:33:14.703 [2024-07-26 09:06:33.000647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.000673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.000854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.000879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.001003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.001031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.001208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.001234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.001396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.001422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 
00:33:14.703 [2024-07-26 09:06:33.001570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.001597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.001731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.001757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.001871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.001897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.002005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.002031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.002189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.002216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 
00:33:14.703 [2024-07-26 09:06:33.002336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.002362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.002508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.002534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.002675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.002700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.002854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.002880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.003001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.003028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 
00:33:14.703 [2024-07-26 09:06:33.003149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.003175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.003308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.003333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.003472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.003497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.003608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.003634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 00:33:14.703 [2024-07-26 09:06:33.003743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.703 [2024-07-26 09:06:33.003768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.703 qpair failed and we were unable to recover it. 
00:33:14.703 [2024-07-26 09:06:33.003884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.703 [2024-07-26 09:06:33.003910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.703 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure triplet above repeats continuously from 09:06:33.003884 through 09:06:33.021801 (roughly 115 attempts), always addr=10.0.0.2, port=4420, errno = 111, alternating between tqpair=0xb2c4b0 and tqpair=0x7fcfac000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:14.706 [2024-07-26 09:06:33.021953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.706 [2024-07-26 09:06:33.021979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.706 qpair failed and we were unable to recover it. 00:33:14.706 [2024-07-26 09:06:33.022103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.022128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.022255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.022282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.022402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.022428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.022583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.022609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 
00:33:14.707 [2024-07-26 09:06:33.022729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.022755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.022877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.022904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.023075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.023101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.023248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.023274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.023407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.023433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 
00:33:14.707 [2024-07-26 09:06:33.023577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.023603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.023742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.023769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.023919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.023945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.024069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.024098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.024243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.024269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 
00:33:14.707 [2024-07-26 09:06:33.024388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.024412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.024541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.024566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.024676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.024701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.024828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.024853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.024997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.025022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 
00:33:14.707 [2024-07-26 09:06:33.025175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.025200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.025322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.025347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.025490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.025515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.025635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.025660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.025781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.025807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 
00:33:14.707 [2024-07-26 09:06:33.025927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.025952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.026095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.026121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.026256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.026281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.026407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.026432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.026565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.026591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 
00:33:14.707 [2024-07-26 09:06:33.026713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.026738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.026863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.026890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.027008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.027034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.027188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.027213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.027388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.027414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 
00:33:14.707 [2024-07-26 09:06:33.027539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.027565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.027677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.027702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.027824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.027850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.028023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.707 [2024-07-26 09:06:33.028048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.707 qpair failed and we were unable to recover it. 00:33:14.707 [2024-07-26 09:06:33.028172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.028196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 
00:33:14.708 [2024-07-26 09:06:33.028316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.028342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.028467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.028492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.028641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.028666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.028787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.028813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.028958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.028983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 
00:33:14.708 [2024-07-26 09:06:33.029152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.029179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.029302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.029327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.029446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.029472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.029595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.029619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.029731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.029756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 
00:33:14.708 [2024-07-26 09:06:33.029895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.029920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.030030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.030055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.030199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.030225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.030348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.030373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.030489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.030524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 
00:33:14.708 [2024-07-26 09:06:33.030667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.030692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.030803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.030828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.030945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.030970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.031093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.031119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.031237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.031262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 
00:33:14.708 [2024-07-26 09:06:33.031381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.031405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.031579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.031604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.031735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.031760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.031881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.031906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.032018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.032044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 
00:33:14.708 [2024-07-26 09:06:33.032190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.032231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.032389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.032429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.032557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.032584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.032711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.032737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.032855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.032881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 
00:33:14.708 [2024-07-26 09:06:33.033000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.033026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.033178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.033204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.033338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.033363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.033537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.033563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 00:33:14.708 [2024-07-26 09:06:33.033688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.708 [2024-07-26 09:06:33.033714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.708 qpair failed and we were unable to recover it. 
00:33:14.708 [2024-07-26 09:06:33.033829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.708 [2024-07-26 09:06:33.033854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.708 qpair failed and we were unable to recover it.
[... the three-line failure above repeats for roughly 115 consecutive connection attempts between 09:06:33.033 and 09:06:33.052 (log time 00:33:14.708-00:33:14.711), differing only in timestamp and in which qpair is reported: tqpair=0xb2c4b0, tqpair=0x7fcfac000b90, or tqpair=0x7fcfb4000b90. Every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:33:14.711 [2024-07-26 09:06:33.052196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.711 [2024-07-26 09:06:33.052223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.711 qpair failed and we were unable to recover it. 00:33:14.711 [2024-07-26 09:06:33.052394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.711 [2024-07-26 09:06:33.052419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.711 qpair failed and we were unable to recover it. 00:33:14.711 [2024-07-26 09:06:33.052546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.711 [2024-07-26 09:06:33.052572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.052702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.052728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.052898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.052925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 
00:33:14.712 [2024-07-26 09:06:33.053046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.053078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.053193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.053218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.053329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.053354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.053473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.053498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.053668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.053693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 
00:33:14.712 [2024-07-26 09:06:33.053806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.053831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.053941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.053966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.054118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.054144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.054264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.054290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.054408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.054434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 
00:33:14.712 [2024-07-26 09:06:33.054553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.054578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.054695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.054720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.054863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.054889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.055001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.055026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.055190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.055218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 
00:33:14.712 [2024-07-26 09:06:33.055344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.055374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.055527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.055554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.055693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.055719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.055893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.055920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.056089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.056116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 
00:33:14.712 [2024-07-26 09:06:33.056242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.056268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.056446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.056472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.056596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.056622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.056738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.056766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.056886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.056913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 
00:33:14.712 [2024-07-26 09:06:33.057036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.057067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.057182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.057207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.057333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.057360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.057509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.057534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.057668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.057694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 
00:33:14.712 [2024-07-26 09:06:33.057844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.057871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.057988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.058013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.058141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.058167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.058284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.058310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.058449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.058490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 
00:33:14.712 [2024-07-26 09:06:33.058648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.058676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.058803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.058829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.058956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.712 [2024-07-26 09:06:33.058982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.712 qpair failed and we were unable to recover it. 00:33:14.712 [2024-07-26 09:06:33.059108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.059136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.059266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.059292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 
00:33:14.713 [2024-07-26 09:06:33.059442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.059468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.059585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.059611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfa4000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.059757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.059785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.059902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.059929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.060091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.060118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 
00:33:14.713 [2024-07-26 09:06:33.060233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.060258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.060372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.060397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.060544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.060568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.060694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.060720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.060836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.060863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 
00:33:14.713 [2024-07-26 09:06:33.061013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.061039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.061198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.061224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.061407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.061432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.061557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.061583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.061724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.061750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 
00:33:14.713 [2024-07-26 09:06:33.061866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.061893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.062015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.062040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.062169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.062196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.062314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.062340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.062462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.062487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 
00:33:14.713 [2024-07-26 09:06:33.062630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.062655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.062788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.062814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.062933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.062958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.063078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.063107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.063248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.063273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 
00:33:14.713 [2024-07-26 09:06:33.063397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.063422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.063544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.063570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.063684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.063709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.063852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.063878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 00:33:14.713 [2024-07-26 09:06:33.064004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.064030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it. 
00:33:14.713 [2024-07-26 09:06:33.064157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.713 [2024-07-26 09:06:33.064186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.713 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet repeats continuously from 09:06:33.064157 through 09:06:33.082144, alternating between tqpair=0x7fcfac000b90, tqpair=0x7fcfb4000b90, and tqpair=0xb2c4b0, all targeting addr=10.0.0.2, port=4420 ...]
00:33:14.716 [2024-07-26 09:06:33.082271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.082299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.082426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.082451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.082608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.082635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.082751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.082777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.082895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.082920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 
00:33:14.716 [2024-07-26 09:06:33.083033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.083064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.083182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.083206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.083331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.083356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.083500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.083526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.083668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.083693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 
00:33:14.716 [2024-07-26 09:06:33.083801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.083827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.083949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.083976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.084121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.084147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.084291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.084316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.084455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.084482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 
00:33:14.716 [2024-07-26 09:06:33.084605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.084630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.084781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.084805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.084928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.084955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.085069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.085094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.085211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.085235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 
00:33:14.716 [2024-07-26 09:06:33.085348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.085372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.085523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.085547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.085673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.085698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.085843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.085868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.086029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.086073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 
00:33:14.716 [2024-07-26 09:06:33.086218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.086246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.086412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.086439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.086565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.086591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.086727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.086752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.086861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.086887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 
00:33:14.716 [2024-07-26 09:06:33.087003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.716 [2024-07-26 09:06:33.087029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.716 qpair failed and we were unable to recover it. 00:33:14.716 [2024-07-26 09:06:33.087162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.087186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.087336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.087361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.087487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.087512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.087629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.087653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.087777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.087805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.087929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.087956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.088103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.088130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.088278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.088305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.088431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.088457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.088599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.088625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.088743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.088770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.088912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.088937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.089092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.089117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.089235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.089261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.089371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.089396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.089546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.089571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.089695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.089720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.089838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.089862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.089988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.090013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.090134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.090161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.090301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.090327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.090448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.090475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.090626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.090652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.090763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.090789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.090907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.090932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.091083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.091109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.091232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.091258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.091401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.091425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.091574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.091598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.091719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.091745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.091859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.091884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.092007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.092033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.092155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.092181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.092323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.092349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.092463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.092490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.092609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.092636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.092758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.092784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.092914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.092941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.093085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.093111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.093243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.093268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.093420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.093446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.093560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.093585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.093730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.093756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.093873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.093898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.094028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.094077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.094237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.094265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.094389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.094414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.094560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.094591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.094709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.094734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.094848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.094874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.095036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.095085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.095234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.095267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.095408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.095433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.095549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.095574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.095692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.095718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.095840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.095866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.096010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.096049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.096188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.096215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.096333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.096358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 
00:33:14.717 [2024-07-26 09:06:33.096475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.096501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.717 qpair failed and we were unable to recover it. 00:33:14.717 [2024-07-26 09:06:33.096627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.717 [2024-07-26 09:06:33.096652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.096779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.096805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.096934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.096962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.097141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.097168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 
00:33:14.718 [2024-07-26 09:06:33.097323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.097348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.097466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.097492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.097613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.097637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.097749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.097774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.097888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.097915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 
00:33:14.718 [2024-07-26 09:06:33.098053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.098092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.098213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.098239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.098361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.098387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.098536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.098561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.098689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.098714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 
00:33:14.718 [2024-07-26 09:06:33.098864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.098895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.099038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.099069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.099230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.099257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.099369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.099395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 00:33:14.718 [2024-07-26 09:06:33.099511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.718 [2024-07-26 09:06:33.099535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.718 qpair failed and we were unable to recover it. 
00:33:14.979 [2024-07-26 09:06:33.099684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.099709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.099817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.099843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.099966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.099992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.100146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.100172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.100280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.100305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 
00:33:14.979 [2024-07-26 09:06:33.100428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.100452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.100593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.100619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.100749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.100774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.100924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.100950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.101077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.101107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 
00:33:14.979 [2024-07-26 09:06:33.101226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.101254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.101379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.101404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.101546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.101571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.101715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.101741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 00:33:14.979 [2024-07-26 09:06:33.101854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.979 [2024-07-26 09:06:33.101880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.979 qpair failed and we were unable to recover it. 
00:33:14.979 [2024-07-26 09:06:33.102002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.102028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.102149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.102177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.102301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.102327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.102434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.102460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.102583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.102609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.102736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.102762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.102892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.102919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.103032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.103076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.103200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.103226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.103371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.103395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.103508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.103533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.103679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.103703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.103838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.103862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.104007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.104032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.104185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.104210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.104441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.104467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.104600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.104625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.104769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.104793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.104948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.104974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.105108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.105134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.105334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.105361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.105489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.105514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.105658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.105683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.105797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.105822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.105982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.106021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.106190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.106228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.106355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.106381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.106508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.106534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.106670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.106696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.106820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.106844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.106962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.106989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.107136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.107165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.107317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.107344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.107479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.107505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.107637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.107664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.107799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.107825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.107946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.107973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.108097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.108129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.108250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.108275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.108389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.108415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.108525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.108550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.108699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.108724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.108841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.108867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.108991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.109019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.109149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.109177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.109322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.109348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.109478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.109504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.109623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.109649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.109806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.109834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.109952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.109978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.110109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.110135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.110277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.110302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.110423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.110449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.110562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.110587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 00:33:14.980 [2024-07-26 09:06:33.110732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.980 [2024-07-26 09:06:33.110757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.980 qpair failed and we were unable to recover it. 
00:33:14.980 [2024-07-26 09:06:33.110893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.980 [2024-07-26 09:06:33.110932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.980 qpair failed and we were unable to recover it.
00:33:14.980 [2024-07-26 09:06:33.111071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.980 [2024-07-26 09:06:33.111099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.980 qpair failed and we were unable to recover it.
00:33:14.980 [2024-07-26 09:06:33.111229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.980 [2024-07-26 09:06:33.111255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.980 qpair failed and we were unable to recover it.
00:33:14.980 [2024-07-26 09:06:33.111377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.980 [2024-07-26 09:06:33.111404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.980 qpair failed and we were unable to recover it.
00:33:14.980 [2024-07-26 09:06:33.111553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.980 [2024-07-26 09:06:33.111578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.980 qpair failed and we were unable to recover it.
00:33:14.980 [2024-07-26 09:06:33.111687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.980 [2024-07-26 09:06:33.111711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.111834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.111861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.111982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.112007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.112140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.112168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.112286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.112311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.112458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.112483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.112617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.112642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.112765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.112791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.112903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.112928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.113070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.113096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.113213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.113239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.113359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.113385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.113493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.113519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.113637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.113664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.113783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.113809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.113941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.113980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.114192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.114231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.114356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.114383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.114528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.114553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.114678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.114704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.114821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.114846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.114979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.115006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.115148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.115174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.115314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.115340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.115457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.115483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:14.981 [2024-07-26 09:06:33.115652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.115678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:33:14.981 [2024-07-26 09:06:33.115798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.115825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:14.981 [2024-07-26 09:06:33.115937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:14.981 [2024-07-26 09:06:33.115964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:14.981 [2024-07-26 09:06:33.116106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.116145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.116295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.116321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.116445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.116472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.116590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.116616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.116749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.116775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.116915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.116941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.117092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.117119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.117234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.117260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.117381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.117407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.117568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.117595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.117708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.117735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.117847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.117878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.118052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.118087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.118234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.118260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.118371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.118398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.118523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.118549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.118676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.118703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.118830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.118868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.118988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.119016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.119140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.119167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.119340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.119367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.119510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.119536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.119692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.119718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.119850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.119877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.120007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.120033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.120205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.120231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.120372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.120397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.120554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.120580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.120730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.120756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.120866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.120892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.121000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.121025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.121147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.121174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.121319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.121345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.121467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.981 [2024-07-26 09:06:33.121493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.981 qpair failed and we were unable to recover it.
00:33:14.981 [2024-07-26 09:06:33.121644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.121670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.121792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.121817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.121952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.121993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.122135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.122161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.122310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.122342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.122491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.122518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.122638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.122664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.122814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.122839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.122990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.123016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.123144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.123171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.123319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.123345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.123460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.123495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.123616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.123642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.123783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.123809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfb4000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.123940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.123978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.124146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.124176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.124305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.124331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.124442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.124468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.124598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.124624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.124753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.124778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.124891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.124917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.125055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.125093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.125236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.125262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.125379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.125405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.125560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.125586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.125707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.125732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.125844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.125869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.125982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.126014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.126174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.126200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.126323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.126352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.126488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.126514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 
00:33:14.982 [2024-07-26 09:06:33.126652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.126683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.126836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.126870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.126992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.127018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.127153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.127180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.127321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.127346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 
00:33:14.982 [2024-07-26 09:06:33.127473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.127499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.127622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.127648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.127771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.127798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.127921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.127947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.128083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.128123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 
00:33:14.982 [2024-07-26 09:06:33.128246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.128273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.128406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.128433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.128552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.128579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.128754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.128780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.128916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.128943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 
00:33:14.982 [2024-07-26 09:06:33.129099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.129127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.129298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.129324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.129451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.129477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c4b0 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.129654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.129681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 00:33:14.982 [2024-07-26 09:06:33.129832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.982 [2024-07-26 09:06:33.129858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420 00:33:14.982 qpair failed and we were unable to recover it. 
00:33:14.982 [2024-07-26 09:06:33.130008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.982 [2024-07-26 09:06:33.130034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.982 qpair failed and we were unable to recover it.
00:33:14.982 [2024-07-26 09:06:33.130172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.983 [2024-07-26 09:06:33.130198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.983 qpair failed and we were unable to recover it.
00:33:14.983 [2024-07-26 09:06:33.130343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.983 [2024-07-26 09:06:33.130378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcfac000b90 with addr=10.0.0.2, port=4420
00:33:14.983 qpair failed and we were unable to recover it.
00:33:14.983 A controller has encountered a failure and is being reset.
00:33:14.983 [2024-07-26 09:06:33.130570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.983 [2024-07-26 09:06:33.130615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3a470 with addr=10.0.0.2, port=4420
00:33:14.983 [2024-07-26 09:06:33.130634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3a470 is same with the state(5) to be set
00:33:14.983 [2024-07-26 09:06:33.130661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3a470 (9): Bad file descriptor
00:33:14.983 [2024-07-26 09:06:33.130681] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:14.983 [2024-07-26 09:06:33.130696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:14.983 [2024-07-26 09:06:33.130717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:14.983 Unable to reset the controller.
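The repeated `errno = 111` in the retries above is Linux `ECONNREFUSED`: while the target is being disconnected, nothing is listening on 10.0.0.2:4420, so every `connect()` is refused until the listener comes back. A minimal sketch of the same condition, assuming port 4420 has no local listener (it uses bash's `/dev/tcp` redirection, not SPDK itself):

```shell
#!/usr/bin/env bash
# Attempt a TCP connect where no listener is expected. With no server on
# the port, connect() fails with ECONNREFUSED (errno 111 on Linux), the
# same value the SPDK host log reports during the target-down window.
# The subshell keeps a failed exec redirection from killing this script.
if (exec 3<>/dev/tcp/127.0.0.1/4420) 2>/dev/null; then
  result="connected"
else
  result="refused"
fi
echo "$result"
```

Once the target's listener is restored (as the harness does below), the same connect succeeds and the host's qpairs recover.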
00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:14.983 Malloc0 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:14.983 [2024-07-26 09:06:33.159599] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:14.983 [2024-07-26 09:06:33.187853] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 
-- # set +x 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.983 09:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1120085 00:33:15.920 Controller properly reset. 00:33:21.188 Initializing NVMe Controllers 00:33:21.188 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:21.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:21.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:21.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:21.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:21.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:21.188 Initialization complete. Launching workers. 
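Stripped of the xtrace prefixes, the `rpc_cmd` calls traced above amount to the following target-side setup sequence. This is a sketch for readability, not the harness itself: it assumes a running `nvmf_tgt` and an SPDK checkout whose RPC client lives at `scripts/rpc.py`; the subsystem NQN, serial, and 10.0.0.2:4420 listener are taken verbatim from the log.

```shell
#!/usr/bin/env bash
# Reconstruction of the traced target setup (requires a running nvmf_tgt).
RPC=scripts/rpc.py   # path inside an SPDK checkout; adjust for your tree

$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk, 512 B blocks
$RPC nvmf_create_transport -t tcp -o             # TCP transport, default options
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

With the listener back, the "Controller properly reset." line above marks the point where the host's reconnect loop finally succeeds.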
00:33:21.188 Starting thread on core 1 00:33:21.188 Starting thread on core 2 00:33:21.188 Starting thread on core 3 00:33:21.188 Starting thread on core 0 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:21.188 00:33:21.188 real 0m10.724s 00:33:21.188 user 0m32.167s 00:33:21.188 sys 0m7.856s 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:21.188 ************************************ 00:33:21.188 END TEST nvmf_target_disconnect_tc2 00:33:21.188 ************************************ 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:21.188 rmmod nvme_tcp 00:33:21.188 rmmod nvme_fabrics 00:33:21.188 rmmod nvme_keyring 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1120497 ']' 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1120497 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1120497 ']' 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1120497 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1120497 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1120497' 00:33:21.188 killing process with pid 1120497 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1120497 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1120497 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.188 09:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.095 09:06:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:23.095 00:33:23.095 real 0m15.436s 00:33:23.095 user 0m57.566s 00:33:23.095 sys 0m10.295s 00:33:23.095 09:06:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:23.095 09:06:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:23.095 ************************************ 00:33:23.095 END TEST nvmf_target_disconnect 00:33:23.095 ************************************ 00:33:23.095 09:06:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:23.095 00:33:23.095 real 6m31.626s 00:33:23.095 user 17m1.771s 00:33:23.095 sys 1m28.256s 00:33:23.095 09:06:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:23.095 09:06:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.095 ************************************ 00:33:23.095 END TEST nvmf_host 00:33:23.095 ************************************ 00:33:23.095 00:33:23.095 real 27m9.859s 00:33:23.095 user 74m9.469s 00:33:23.095 sys 6m30.095s 00:33:23.095 09:06:41 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:23.095 09:06:41 nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:33:23.095 ************************************ 00:33:23.095 END TEST nvmf_tcp 00:33:23.095 ************************************ 00:33:23.095 09:06:41 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:33:23.095 09:06:41 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:23.095 09:06:41 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:23.095 09:06:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:23.095 09:06:41 -- common/autotest_common.sh@10 -- # set +x 00:33:23.095 ************************************ 00:33:23.095 START TEST spdkcli_nvmf_tcp 00:33:23.095 ************************************ 00:33:23.095 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:23.356 * Looking for test storage... 00:33:23.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.356 
09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1121691 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1121691 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1121691 ']' 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:23.356 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.356 [2024-07-26 09:06:41.636276] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:23.356 [2024-07-26 09:06:41.636374] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121691 ] 00:33:23.356 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.356 [2024-07-26 09:06:41.666930] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:23.356 [2024-07-26 09:06:41.709349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:23.356 [2024-07-26 09:06:41.812157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.356 [2024-07-26 09:06:41.812169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # 
timing_enter spdkcli_create_nvmf_config 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.615 09:06:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:23.615 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:23.616 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:23.616 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:23.616 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:23.616 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:23.616 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:23.616 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:23.616 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:23.616 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:23.616 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:23.616 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:23.616 ' 00:33:26.149 [2024-07-26 09:06:44.562625] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.528 [2024-07-26 09:06:45.782907] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:30.061 [2024-07-26 09:06:48.037977] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:31.968 [2024-07-26 09:06:49.976092] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:33.347 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:33.347 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:33.347 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:33.347 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:33.347 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:33.347 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:33.347 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:33.347 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:33.347 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:33.347 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:33.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:33.347 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:33.347 09:06:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:33.347 09:06:51 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:33:33.347 09:06:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:33.347 09:06:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:33.347 09:06:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:33.347 09:06:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:33.347 09:06:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:33.347 09:06:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:33.604 09:06:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:33.604 09:06:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:33.862 09:06:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:33.862 09:06:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:33.862 09:06:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:33.862 09:06:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:33.862 09:06:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:33.862 09:06:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:33.862 09:06:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:33.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:33.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:33.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:33.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:33.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:33.862 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:33.862 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:33.862 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:33.862 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:33.862 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:33.862 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:33.862 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:33.862 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:33.862 ' 00:33:39.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:39.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:39.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:39.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:39.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:39.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:39.135 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:39.135 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:39.135 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:39.135 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:39.135 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:39.135 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:39.135 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:39.135 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1121691 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1121691 ']' 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1121691 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1121691 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1121691' 00:33:39.136 killing process with pid 1121691 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1121691 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1121691 00:33:39.136 09:06:57 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1121691 ']' 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1121691 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1121691 ']' 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1121691 00:33:39.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1121691) - No such process 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1121691 is not found' 00:33:39.136 Process with pid 1121691 is not found 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:39.136 09:06:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:39.395 00:33:39.395 real 0m16.069s 00:33:39.395 user 0m34.071s 00:33:39.395 sys 0m0.806s 00:33:39.395 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:39.395 09:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.395 ************************************ 00:33:39.395 END TEST spdkcli_nvmf_tcp 00:33:39.395 ************************************ 00:33:39.395 09:06:57 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:39.395 09:06:57 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:39.395 09:06:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:33:39.395 09:06:57 -- common/autotest_common.sh@10 -- # set +x 00:33:39.395 ************************************ 00:33:39.395 START TEST nvmf_identify_passthru 00:33:39.395 ************************************ 00:33:39.395 09:06:57 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:39.395 * Looking for test storage... 00:33:39.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:39.395 09:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 
00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.395 09:06:57 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.395 09:06:57 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.395 09:06:57 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.395 09:06:57 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.395 09:06:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.395 09:06:57 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.395 09:06:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:39.395 09:06:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:39.395 09:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.395 09:06:57 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.395 09:06:57 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.395 09:06:57 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.395 09:06:57 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.395 09:06:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.395 09:06:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.395 09:06:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:33:39.395 09:06:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.395 09:06:57 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.395 09:06:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:39.395 09:06:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:39.395 09:06:57 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:33:39.395 09:06:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:41.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:41.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:41.302 09:06:59 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:41.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.302 09:06:59 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:41.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:41.302 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:41.303 09:06:59 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:41.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:41.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:33:41.303 00:33:41.303 --- 10.0.0.2 ping statistics --- 00:33:41.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.303 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:41.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:41.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:33:41.303 00:33:41.303 --- 10.0.0.1 ping statistics --- 00:33:41.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.303 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:41.303 09:06:59 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:41.303 09:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:41.303 09:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:33:41.303 09:06:59 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:33:41.303 09:06:59 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:33:41.303 09:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:33:41.303 09:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:33:41.303 09:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:41.303 09:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:41.303 09:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:41.561 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.756 09:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:33:45.756 09:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:45.756 09:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:33:45.756 09:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:45.756 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.945 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:49.946 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:49.946 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:49.946 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:49.946 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:49.946 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:49.946 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:49.946 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1126185 00:33:49.946 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:49.946 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:49.946 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1126185 00:33:49.946 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1126185 ']' 00:33:49.946 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.946 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:49.946 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:49.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.946 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:49.946 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:49.946 [2024-07-26 09:07:08.237268] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:49.946 [2024-07-26 09:07:08.237365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.946 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.946 [2024-07-26 09:07:08.279804] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:49.946 [2024-07-26 09:07:08.305366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:49.946 [2024-07-26 09:07:08.391039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.946 [2024-07-26 09:07:08.391122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:49.946 [2024-07-26 09:07:08.391150] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.946 [2024-07-26 09:07:08.391162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.946 [2024-07-26 09:07:08.391171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:49.946 [2024-07-26 09:07:08.391231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.946 [2024-07-26 09:07:08.391255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:49.946 [2024-07-26 09:07:08.391313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:49.946 [2024-07-26 09:07:08.391315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:33:50.204 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:50.204 INFO: Log level set to 20 00:33:50.204 INFO: Requests: 00:33:50.204 { 00:33:50.204 "jsonrpc": "2.0", 00:33:50.204 "method": "nvmf_set_config", 00:33:50.204 "id": 1, 00:33:50.204 "params": { 00:33:50.204 "admin_cmd_passthru": { 00:33:50.204 "identify_ctrlr": true 00:33:50.204 } 00:33:50.204 } 00:33:50.204 } 00:33:50.204 00:33:50.204 INFO: response: 00:33:50.204 { 00:33:50.204 "jsonrpc": "2.0", 00:33:50.204 "id": 1, 00:33:50.204 "result": true 00:33:50.204 } 00:33:50.204 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.204 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:50.204 INFO: Setting log level to 20 00:33:50.204 INFO: Setting log level to 20 00:33:50.204 INFO: Log level set to 20 00:33:50.204 INFO: Log level set to 20 00:33:50.204 
INFO: Requests: 00:33:50.204 { 00:33:50.204 "jsonrpc": "2.0", 00:33:50.204 "method": "framework_start_init", 00:33:50.204 "id": 1 00:33:50.204 } 00:33:50.204 00:33:50.204 INFO: Requests: 00:33:50.204 { 00:33:50.204 "jsonrpc": "2.0", 00:33:50.204 "method": "framework_start_init", 00:33:50.204 "id": 1 00:33:50.204 } 00:33:50.204 00:33:50.204 [2024-07-26 09:07:08.570269] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:50.204 INFO: response: 00:33:50.204 { 00:33:50.204 "jsonrpc": "2.0", 00:33:50.204 "id": 1, 00:33:50.204 "result": true 00:33:50.204 } 00:33:50.204 00:33:50.204 INFO: response: 00:33:50.204 { 00:33:50.204 "jsonrpc": "2.0", 00:33:50.204 "id": 1, 00:33:50.204 "result": true 00:33:50.204 } 00:33:50.204 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.204 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:50.204 INFO: Setting log level to 40 00:33:50.204 INFO: Setting log level to 40 00:33:50.204 INFO: Setting log level to 40 00:33:50.204 [2024-07-26 09:07:08.580227] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.204 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:50.204 09:07:08 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:33:50.204 09:07:08 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.204 09:07:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:53.531 Nvme0n1 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.531 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.531 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.531 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:53.531 [2024-07-26 09:07:11.464679] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.531 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.531 09:07:11 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:53.531 [ 00:33:53.531 { 00:33:53.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:53.531 "subtype": "Discovery", 00:33:53.531 "listen_addresses": [], 00:33:53.531 "allow_any_host": true, 00:33:53.531 "hosts": [] 00:33:53.531 }, 00:33:53.531 { 00:33:53.531 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.531 "subtype": "NVMe", 00:33:53.531 "listen_addresses": [ 00:33:53.531 { 00:33:53.531 "trtype": "TCP", 00:33:53.531 "adrfam": "IPv4", 00:33:53.531 "traddr": "10.0.0.2", 00:33:53.531 "trsvcid": "4420" 00:33:53.531 } 00:33:53.531 ], 00:33:53.531 "allow_any_host": true, 00:33:53.531 "hosts": [], 00:33:53.531 "serial_number": "SPDK00000000000001", 00:33:53.531 "model_number": "SPDK bdev Controller", 00:33:53.531 "max_namespaces": 1, 00:33:53.531 "min_cntlid": 1, 00:33:53.531 "max_cntlid": 65519, 00:33:53.531 "namespaces": [ 00:33:53.531 { 00:33:53.531 "nsid": 1, 00:33:53.531 "bdev_name": "Nvme0n1", 00:33:53.531 "name": "Nvme0n1", 00:33:53.531 "nguid": "683AE66A59E44CE4A3A3B72B4A59CC79", 00:33:53.531 "uuid": "683ae66a-59e4-4ce4-a3a3-b72b4a59cc79" 00:33:53.531 } 00:33:53.531 ] 00:33:53.531 } 00:33:53.531 ] 00:33:53.531 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.531 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:53.531 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:53.531 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:53.531 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.531 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:33:53.532 09:07:11 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:53.532 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:53.532 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:53.532 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.532 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:53.532 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:33:53.532 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:53.792 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:53.792 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.792 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:53.792 09:07:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.792 09:07:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:53.792 09:07:12 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:53.792 rmmod 
nvme_tcp 00:33:53.792 rmmod nvme_fabrics 00:33:53.792 rmmod nvme_keyring 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1126185 ']' 00:33:53.792 09:07:12 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1126185 00:33:53.792 09:07:12 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1126185 ']' 00:33:53.792 09:07:12 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1126185 00:33:53.792 09:07:12 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:33:53.792 09:07:12 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:53.792 09:07:12 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1126185 00:33:53.792 09:07:12 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:53.792 09:07:12 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:53.792 09:07:12 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1126185' 00:33:53.792 killing process with pid 1126185 00:33:53.792 09:07:12 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1126185 00:33:53.792 09:07:12 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1126185 00:33:55.692 09:07:13 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:55.692 09:07:13 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:55.692 09:07:13 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:55.693 09:07:13 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:33:55.693 09:07:13 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:55.693 09:07:13 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.693 09:07:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:55.693 09:07:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.593 09:07:15 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:57.593 00:33:57.593 real 0m18.047s 00:33:57.593 user 0m27.322s 00:33:57.593 sys 0m2.275s 00:33:57.593 09:07:15 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:57.593 09:07:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.593 ************************************ 00:33:57.593 END TEST nvmf_identify_passthru 00:33:57.593 ************************************ 00:33:57.593 09:07:15 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:57.593 09:07:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:57.593 09:07:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:57.593 09:07:15 -- common/autotest_common.sh@10 -- # set +x 00:33:57.593 ************************************ 00:33:57.593 START TEST nvmf_dif 00:33:57.593 ************************************ 00:33:57.593 09:07:15 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:57.593 * Looking for test storage... 
00:33:57.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:57.593 09:07:15 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.594 09:07:15 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.594 09:07:15 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.594 09:07:15 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.594 09:07:15 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.594 09:07:15 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.594 09:07:15 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.594 09:07:15 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:57.594 09:07:15 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:57.594 09:07:15 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:57.594 09:07:15 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:57.594 09:07:15 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:57.594 09:07:15 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:57.594 09:07:15 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.594 09:07:15 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:57.594 09:07:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:57.594 09:07:15 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:33:57.594 09:07:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:59.499 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:33:59.499 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:59.499 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:59.499 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:59.499 09:07:17 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:59.499 09:07:17 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:59.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:59.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:33:59.499 00:33:59.499 --- 10.0.0.2 ping statistics --- 00:33:59.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.500 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:33:59.500 09:07:17 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:59.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:59.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:33:59.500 00:33:59.500 --- 10.0.0.1 ping statistics --- 00:33:59.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.500 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:33:59.500 09:07:17 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:59.500 09:07:17 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:33:59.500 09:07:17 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:59.500 09:07:17 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:00.875 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:00.875 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:00.875 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:00.875 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:00.875 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:00.875 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:00.875 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:00.875 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:00.875 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:00.875 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:00.875 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:00.875 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:00.875 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:00.875 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:00.875 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:00.875 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:00.875 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:00.875 09:07:19 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.875 09:07:19 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:00.875 09:07:19 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:00.875 09:07:19 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.875 09:07:19 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:00.875 09:07:19 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:00.875 09:07:19 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:00.875 09:07:19 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:00.875 09:07:19 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:00.875 09:07:19 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:00.875 09:07:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:00.875 09:07:19 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1129447 00:34:00.875 09:07:19 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:00.875 09:07:19 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1129447 00:34:00.875 09:07:19 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1129447 ']' 00:34:00.875 09:07:19 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.875 09:07:19 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:00.875 09:07:19 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:00.875 09:07:19 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:00.875 09:07:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:00.875 [2024-07-26 09:07:19.218514] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:34:00.875 [2024-07-26 09:07:19.218589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:00.875 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.875 [2024-07-26 09:07:19.254849] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:00.875 [2024-07-26 09:07:19.281514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.134 [2024-07-26 09:07:19.365524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.134 [2024-07-26 09:07:19.365577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.134 [2024-07-26 09:07:19.365606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.134 [2024-07-26 09:07:19.365617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.134 [2024-07-26 09:07:19.365626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:01.134 [2024-07-26 09:07:19.365652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.134 09:07:19 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:01.134 09:07:19 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:34:01.134 09:07:19 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:01.134 09:07:19 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:01.134 09:07:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.134 09:07:19 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.134 09:07:19 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:01.134 09:07:19 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:01.134 09:07:19 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.134 09:07:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.134 [2024-07-26 09:07:19.506055] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.134 09:07:19 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.134 09:07:19 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:01.134 09:07:19 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:01.134 09:07:19 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:01.134 09:07:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.134 ************************************ 00:34:01.134 START TEST fio_dif_1_default 00:34:01.134 ************************************ 00:34:01.134 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:34:01.134 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:01.134 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:01.134 09:07:19 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:01.134 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:01.134 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:01.134 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.135 bdev_null0 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.135 [2024-07-26 09:07:19.570445] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:34:01.135 { 00:34:01.135 "params": { 00:34:01.135 "name": "Nvme$subsystem", 00:34:01.135 "trtype": "$TEST_TRANSPORT", 00:34:01.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.135 "adrfam": "ipv4", 00:34:01.135 "trsvcid": "$NVMF_PORT", 00:34:01.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.135 "hdgst": ${hdgst:-false}, 00:34:01.135 "ddgst": ${ddgst:-false} 00:34:01.135 }, 00:34:01.135 "method": "bdev_nvme_attach_controller" 00:34:01.135 } 00:34:01.135 EOF 00:34:01.135 )") 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:01.135 09:07:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:01.135 "params": { 00:34:01.135 "name": "Nvme0", 00:34:01.135 "trtype": "tcp", 00:34:01.135 "traddr": "10.0.0.2", 00:34:01.135 "adrfam": "ipv4", 00:34:01.135 "trsvcid": "4420", 00:34:01.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:01.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:01.135 "hdgst": false, 00:34:01.135 "ddgst": false 00:34:01.135 }, 00:34:01.135 "method": "bdev_nvme_attach_controller" 00:34:01.135 }' 00:34:01.394 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:01.394 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:01.394 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.394 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.394 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:01.394 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:01.394 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:01.394 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:01.394 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:01.394 09:07:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.394 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:01.394 fio-3.35 
00:34:01.394 Starting 1 thread 00:34:01.652 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.882 00:34:13.882 filename0: (groupid=0, jobs=1): err= 0: pid=1129675: Fri Jul 26 09:07:30 2024 00:34:13.882 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:34:13.882 slat (nsec): min=6424, max=39023, avg=9013.39, stdev=2934.75 00:34:13.882 clat (usec): min=40843, max=47635, avg=41003.29, stdev=429.10 00:34:13.882 lat (usec): min=40850, max=47656, avg=41012.30, stdev=429.26 00:34:13.882 clat percentiles (usec): 00:34:13.882 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:13.882 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:13.882 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:13.882 | 99.00th=[41157], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:34:13.882 | 99.99th=[47449] 00:34:13.882 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=388.80, stdev=11.72, samples=20 00:34:13.882 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:13.883 lat (msec) : 50=100.00% 00:34:13.883 cpu : usr=89.43%, sys=10.30%, ctx=18, majf=0, minf=248 00:34:13.883 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.883 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.883 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:13.883 00:34:13.883 Run status group 0 (all jobs): 00:34:13.883 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10012-10012msec 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.883 00:34:13.883 real 0m11.116s 00:34:13.883 user 0m10.116s 00:34:13.883 sys 0m1.333s 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 ************************************ 00:34:13.883 END TEST fio_dif_1_default 00:34:13.883 ************************************ 00:34:13.883 09:07:30 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:13.883 09:07:30 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:13.883 09:07:30 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 ************************************ 00:34:13.883 START TEST fio_dif_1_multi_subsystems 00:34:13.883 ************************************ 00:34:13.883 09:07:30 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 bdev_null0 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.883 09:07:30 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 [2024-07-26 09:07:30.726595] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 bdev_null1 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:13.883 { 00:34:13.883 "params": { 00:34:13.883 "name": "Nvme$subsystem", 00:34:13.883 "trtype": "$TEST_TRANSPORT", 00:34:13.883 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:13.883 "adrfam": "ipv4", 00:34:13.883 "trsvcid": "$NVMF_PORT", 00:34:13.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:13.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:13.883 "hdgst": ${hdgst:-false}, 00:34:13.883 "ddgst": ${ddgst:-false} 00:34:13.883 }, 00:34:13.883 "method": "bdev_nvme_attach_controller" 00:34:13.883 } 00:34:13.883 EOF 00:34:13.883 )") 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:13.883 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:13.883 { 00:34:13.883 "params": { 00:34:13.883 "name": "Nvme$subsystem", 00:34:13.883 "trtype": "$TEST_TRANSPORT", 00:34:13.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:13.884 "adrfam": "ipv4", 00:34:13.884 "trsvcid": "$NVMF_PORT", 00:34:13.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:13.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:13.884 "hdgst": ${hdgst:-false}, 00:34:13.884 "ddgst": ${ddgst:-false} 00:34:13.884 }, 00:34:13.884 "method": "bdev_nvme_attach_controller" 00:34:13.884 } 00:34:13.884 EOF 00:34:13.884 )") 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:13.884 "params": { 00:34:13.884 "name": "Nvme0", 00:34:13.884 "trtype": "tcp", 00:34:13.884 "traddr": "10.0.0.2", 00:34:13.884 "adrfam": "ipv4", 00:34:13.884 "trsvcid": "4420", 00:34:13.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:13.884 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:13.884 "hdgst": false, 00:34:13.884 "ddgst": false 00:34:13.884 }, 00:34:13.884 "method": "bdev_nvme_attach_controller" 00:34:13.884 },{ 00:34:13.884 "params": { 00:34:13.884 "name": "Nvme1", 00:34:13.884 "trtype": "tcp", 00:34:13.884 "traddr": "10.0.0.2", 00:34:13.884 "adrfam": "ipv4", 00:34:13.884 "trsvcid": "4420", 00:34:13.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:13.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:13.884 "hdgst": false, 00:34:13.884 "ddgst": false 00:34:13.884 }, 00:34:13.884 "method": "bdev_nvme_attach_controller" 00:34:13.884 }' 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:13.884 09:07:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.884 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:13.884 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:13.884 fio-3.35 00:34:13.884 Starting 2 threads 00:34:13.884 EAL: No free 2048 kB hugepages reported on node 1 00:34:23.847 00:34:23.847 filename0: (groupid=0, jobs=1): err= 0: pid=1131095: Fri Jul 26 09:07:41 2024 00:34:23.847 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10004msec) 00:34:23.847 slat (nsec): min=4732, max=72235, avg=9494.35, stdev=4302.93 00:34:23.847 clat (usec): min=665, max=46113, avg=21029.82, stdev=20198.68 00:34:23.847 lat (usec): min=672, max=46185, avg=21039.32, stdev=20199.02 00:34:23.847 clat percentiles (usec): 00:34:23.847 | 1.00th=[ 725], 5.00th=[ 750], 10.00th=[ 758], 20.00th=[ 783], 00:34:23.847 | 30.00th=[ 799], 40.00th=[ 824], 50.00th=[41157], 60.00th=[41157], 00:34:23.847 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:23.847 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:34:23.847 | 99.99th=[45876] 00:34:23.847 bw ( KiB/s): min= 702, max= 768, per=66.46%, avg=761.16, stdev=20.50, samples=19 00:34:23.847 iops : min= 175, max= 192, avg=190.26, stdev= 5.21, samples=19 00:34:23.847 lat (usec) : 750=5.26%, 1000=44.21% 00:34:23.847 lat (msec) : 2=0.42%, 50=50.11% 00:34:23.847 cpu : usr=94.54%, sys=5.16%, ctx=23, majf=0, minf=212 00:34:23.847 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:23.847 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.847 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.847 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:23.847 filename1: (groupid=0, jobs=1): err= 0: pid=1131096: Fri Jul 26 09:07:41 2024 00:34:23.847 read: IOPS=96, BW=385KiB/s (395kB/s)(3856KiB/10005msec) 00:34:23.847 slat (nsec): min=4931, max=35863, avg=9553.81, stdev=3630.95 00:34:23.847 clat (usec): min=40812, max=45195, avg=41481.63, stdev=565.57 00:34:23.847 lat (usec): min=40819, max=45208, avg=41491.18, stdev=565.54 00:34:23.847 clat percentiles (usec): 00:34:23.847 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:23.847 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:34:23.847 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:23.847 | 99.00th=[42730], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:34:23.847 | 99.99th=[45351] 00:34:23.847 bw ( KiB/s): min= 352, max= 416, per=33.54%, avg=384.00, stdev=10.38, samples=20 00:34:23.847 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:34:23.847 lat (msec) : 50=100.00% 00:34:23.847 cpu : usr=93.74%, sys=5.96%, ctx=16, majf=0, minf=73 00:34:23.847 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:23.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.847 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.847 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:23.847 00:34:23.847 Run status group 0 (all jobs): 00:34:23.847 READ: bw=1145KiB/s (1173kB/s), 385KiB/s-760KiB/s (395kB/s-778kB/s), io=11.2MiB (11.7MB), run=10004-10005msec 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:23.847 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:23.848 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:23.848 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.848 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
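The aggregate line in the run status above (`bw=1145KiB/s`, `io=11.2MiB`) reconciles with the per-file results; a quick arithmetic check using the 4 KiB read size and the counts from the two `issued rwts` lines:

```shell
#!/bin/sh
# Cross-check the run-status totals against the per-file figures.
reads=$((1900 + 964))    # issued reads, filename0 + filename1
bw_kib=$((760 + 385))    # sum of per-file average bandwidths, KiB/s
io_kib=$((reads * 4))    # 11456 KiB = 11.1875 MiB ~= reported io=11.2MiB
echo "aggregate bw: ${bw_kib} KiB/s, io: ${io_kib} KiB"
```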
00:34:23.848 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.848 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:23.848 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.848 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.848 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.848 00:34:23.848 real 0m11.518s 00:34:23.848 user 0m20.348s 00:34:23.848 sys 0m1.397s 00:34:23.848 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:23.848 09:07:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.848 ************************************ 00:34:23.848 END TEST fio_dif_1_multi_subsystems 00:34:23.848 ************************************ 00:34:23.848 09:07:42 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:23.848 09:07:42 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:23.848 09:07:42 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:23.848 09:07:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.848 ************************************ 00:34:23.848 START TEST fio_dif_rand_params 00:34:23.848 ************************************ 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.848 bdev_null0 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.848 [2024-07-26 09:07:42.299373] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:23.848 { 00:34:23.848 "params": { 00:34:23.848 "name": "Nvme$subsystem", 00:34:23.848 "trtype": "$TEST_TRANSPORT", 00:34:23.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.848 "adrfam": "ipv4", 00:34:23.848 "trsvcid": "$NVMF_PORT", 00:34:23.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.848 "hdgst": ${hdgst:-false}, 00:34:23.848 "ddgst": ${ddgst:-false} 00:34:23.848 }, 00:34:23.848 "method": "bdev_nvme_attach_controller" 00:34:23.848 } 00:34:23.848 EOF 00:34:23.848 )") 
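The `create_subsystem` helper in target/dif.sh issues the RPC sequence traced above: create a null bdev with DIF metadata, create the NVMe-oF subsystem, attach the namespace, and add a TCP listener. A sketch that assembles the equivalent command lines (`rpc.py` stands in for the traced `rpc_cmd` wrapper; argument values are taken from the trace):

```shell
#!/bin/sh
# Printed rather than executed: running these needs a live SPDK target.
sub_id=0
bdev="bdev_null${sub_id}"
nqn="nqn.2016-06.io.spdk:cnode${sub_id}"

cmds="rpc.py bdev_null_create $bdev 64 512 --md-size 16 --dif-type 3
rpc.py nvmf_create_subsystem $nqn --serial-number 53313233-${sub_id} --allow-any-host
rpc.py nvmf_subsystem_add_ns $nqn $bdev
rpc.py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420"

printf '%s\n' "$cmds"
```

Note the `--dif-type 3` on the null bdev: that is what makes this phase of `fio_dif_rand_params` exercise DIF type 3 protection information end to end.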
00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.848 09:07:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file <= files )) 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:24.106 "params": { 00:34:24.106 "name": "Nvme0", 00:34:24.106 "trtype": "tcp", 00:34:24.106 "traddr": "10.0.0.2", 00:34:24.106 "adrfam": "ipv4", 00:34:24.106 "trsvcid": "4420", 00:34:24.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:24.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:24.106 "hdgst": false, 00:34:24.106 "ddgst": false 00:34:24.106 }, 00:34:24.106 "method": "bdev_nvme_attach_controller" 00:34:24.106 }' 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:24.106 09:07:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- 
# /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.106 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:24.106 ... 00:34:24.106 fio-3.35 00:34:24.106 Starting 3 threads 00:34:24.364 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.920 00:34:30.920 filename0: (groupid=0, jobs=1): err= 0: pid=1132492: Fri Jul 26 09:07:48 2024 00:34:30.920 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(128MiB/5046msec) 00:34:30.920 slat (nsec): min=5250, max=56712, avg=16304.51, stdev=5318.34 00:34:30.920 clat (usec): min=4942, max=91008, avg=14760.22, stdev=12757.74 00:34:30.920 lat (usec): min=4954, max=91027, avg=14776.53, stdev=12757.94 00:34:30.920 clat percentiles (usec): 00:34:30.920 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 7111], 20.00th=[ 8356], 00:34:30.920 | 30.00th=[ 8979], 40.00th=[10028], 50.00th=[11076], 60.00th=[11994], 00:34:30.920 | 70.00th=[12780], 80.00th=[14353], 90.00th=[45351], 95.00th=[51119], 00:34:30.920 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57410], 99.95th=[90702], 00:34:30.920 | 99.99th=[90702] 00:34:30.920 bw ( KiB/s): min=20480, max=32256, per=32.52%, avg=26086.40, stdev=4298.04, samples=10 00:34:30.920 iops : min= 160, max= 252, avg=203.80, stdev=33.58, samples=10 00:34:30.920 lat (msec) : 10=39.86%, 20=49.76%, 50=3.72%, 100=6.66% 00:34:30.920 cpu : usr=93.60%, sys=5.95%, ctx=14, majf=0, minf=72 00:34:30.920 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.920 issued rwts: total=1021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:30.920 filename0: (groupid=0, jobs=1): err= 0: pid=1132493: Fri Jul 26 09:07:48 2024 00:34:30.920 read: IOPS=214, BW=26.9MiB/s 
(28.2MB/s)(135MiB/5010msec) 00:34:30.920 slat (nsec): min=5206, max=37964, avg=14931.04, stdev=4053.88 00:34:30.920 clat (usec): min=4975, max=90574, avg=13934.23, stdev=12540.59 00:34:30.920 lat (usec): min=4988, max=90588, avg=13949.16, stdev=12540.47 00:34:30.920 clat percentiles (usec): 00:34:30.920 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 7373], 20.00th=[ 8225], 00:34:30.920 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11207], 00:34:30.920 | 70.00th=[11994], 80.00th=[13042], 90.00th=[15401], 95.00th=[51119], 00:34:30.920 | 99.00th=[54264], 99.50th=[54264], 99.90th=[89654], 99.95th=[90702], 00:34:30.920 | 99.99th=[90702] 00:34:30.920 bw ( KiB/s): min=20264, max=32768, per=34.28%, avg=27498.40, stdev=3723.06, samples=10 00:34:30.920 iops : min= 158, max= 256, avg=214.80, stdev=29.15, samples=10 00:34:30.920 lat (msec) : 10=47.91%, 20=42.90%, 50=3.16%, 100=6.04% 00:34:30.920 cpu : usr=94.85%, sys=4.73%, ctx=16, majf=0, minf=91 00:34:30.920 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.920 issued rwts: total=1077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:30.920 filename0: (groupid=0, jobs=1): err= 0: pid=1132494: Fri Jul 26 09:07:48 2024 00:34:30.920 read: IOPS=210, BW=26.4MiB/s (27.6MB/s)(133MiB/5046msec) 00:34:30.920 slat (nsec): min=5089, max=72502, avg=15062.58, stdev=4433.60 00:34:30.920 clat (usec): min=5902, max=56080, avg=14166.02, stdev=11386.84 00:34:30.920 lat (usec): min=5915, max=56108, avg=14181.08, stdev=11386.80 00:34:30.920 clat percentiles (usec): 00:34:30.920 | 1.00th=[ 6587], 5.00th=[ 7308], 10.00th=[ 8160], 20.00th=[ 8979], 00:34:30.920 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[11863], 00:34:30.920 | 70.00th=[12649], 
80.00th=[13566], 90.00th=[16057], 95.00th=[50070], 00:34:30.920 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:34:30.920 | 99.99th=[55837] 00:34:30.920 bw ( KiB/s): min=20224, max=35584, per=33.90%, avg=27187.20, stdev=5194.55, samples=10 00:34:30.920 iops : min= 158, max= 278, avg=212.40, stdev=40.58, samples=10 00:34:30.920 lat (msec) : 10=40.79%, 20=50.85%, 50=3.01%, 100=5.36% 00:34:30.920 cpu : usr=94.27%, sys=5.23%, ctx=21, majf=0, minf=154 00:34:30.920 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.920 issued rwts: total=1064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:30.920 00:34:30.920 Run status group 0 (all jobs): 00:34:30.920 READ: bw=78.3MiB/s (82.1MB/s), 25.3MiB/s-26.9MiB/s (26.5MB/s-28.2MB/s), io=395MiB (414MB), run=5010-5046msec 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.920 09:07:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.920 bdev_null0 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.920 [2024-07-26 09:07:48.403872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
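The run-status total for the three-thread randread group a few lines above (`io=395MiB`) likewise reconciles with the per-thread `issued rwts` counts, given the 128 KiB block size:

```shell
#!/bin/sh
# Reconcile the group run status with the per-thread issued-read counts.
reads=$((1021 + 1077 + 1064))    # 3162 reads across the three threads
io_mib=$((reads * 128 / 1024))   # 128 KiB per read
echo "io: ${io_mib} MiB"         # prints "io: 395 MiB"
```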
00:34:30.920 bdev_null1 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.920 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.921 bdev_null2 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:30.921 09:07:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:30.921 { 00:34:30.921 "params": { 00:34:30.921 "name": "Nvme$subsystem", 00:34:30.921 "trtype": "$TEST_TRANSPORT", 00:34:30.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:30.921 "adrfam": "ipv4", 00:34:30.921 "trsvcid": "$NVMF_PORT", 00:34:30.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:30.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:30.921 "hdgst": ${hdgst:-false}, 00:34:30.921 "ddgst": ${ddgst:-false} 00:34:30.921 }, 00:34:30.921 "method": "bdev_nvme_attach_controller" 00:34:30.921 } 00:34:30.921 EOF 00:34:30.921 )") 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:30.921 { 00:34:30.921 "params": { 00:34:30.921 "name": "Nvme$subsystem", 00:34:30.921 "trtype": "$TEST_TRANSPORT", 00:34:30.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:30.921 "adrfam": "ipv4", 00:34:30.921 "trsvcid": "$NVMF_PORT", 00:34:30.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:30.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:30.921 "hdgst": ${hdgst:-false}, 00:34:30.921 "ddgst": ${ddgst:-false} 00:34:30.921 }, 00:34:30.921 "method": "bdev_nvme_attach_controller" 00:34:30.921 } 00:34:30.921 EOF 00:34:30.921 )") 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:30.921 { 00:34:30.921 "params": { 00:34:30.921 "name": "Nvme$subsystem", 00:34:30.921 "trtype": "$TEST_TRANSPORT", 00:34:30.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:30.921 "adrfam": "ipv4", 00:34:30.921 "trsvcid": "$NVMF_PORT", 00:34:30.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:30.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:30.921 "hdgst": ${hdgst:-false}, 00:34:30.921 "ddgst": ${ddgst:-false} 00:34:30.921 }, 00:34:30.921 "method": "bdev_nvme_attach_controller" 00:34:30.921 } 00:34:30.921 EOF 00:34:30.921 )") 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:30.921 "params": { 00:34:30.921 "name": "Nvme0", 00:34:30.921 "trtype": "tcp", 00:34:30.921 "traddr": "10.0.0.2", 00:34:30.921 "adrfam": "ipv4", 00:34:30.921 "trsvcid": "4420", 00:34:30.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:30.921 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:30.921 "hdgst": false, 00:34:30.921 "ddgst": false 00:34:30.921 }, 00:34:30.921 "method": "bdev_nvme_attach_controller" 00:34:30.921 },{ 00:34:30.921 "params": { 00:34:30.921 "name": "Nvme1", 00:34:30.921 "trtype": "tcp", 00:34:30.921 "traddr": "10.0.0.2", 00:34:30.921 "adrfam": "ipv4", 00:34:30.921 "trsvcid": "4420", 00:34:30.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:30.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:30.921 "hdgst": false, 00:34:30.921 "ddgst": false 00:34:30.921 }, 00:34:30.921 "method": "bdev_nvme_attach_controller" 00:34:30.921 },{ 00:34:30.921 "params": { 00:34:30.921 "name": "Nvme2", 00:34:30.921 "trtype": "tcp", 00:34:30.921 "traddr": "10.0.0.2", 00:34:30.921 "adrfam": "ipv4", 00:34:30.921 "trsvcid": "4420", 00:34:30.921 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:30.921 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:30.921 "hdgst": false, 00:34:30.921 "ddgst": false 00:34:30.921 }, 00:34:30.921 "method": "bdev_nvme_attach_controller" 00:34:30.921 }' 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:30.921 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:30.922 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:30.922 09:07:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:30.922 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:30.922 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:30.922 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:30.922 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:30.922 09:07:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.922 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:30.922 ... 00:34:30.922 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:30.922 ... 00:34:30.922 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:30.922 ... 
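The trace above shows `nvmf/common.sh` building the fio `--spdk_json_conf` payload: for each subsystem it appends one `bdev_nvme_attach_controller` JSON fragment (via a heredoc) to a bash array, then joins the entries with `IFS=,` and `printf` before piping through `jq`. A minimal sketch of that assembly pattern follows; the addresses, port, and NQNs are placeholders mirroring the log, not a definitive reproduction of the script:

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern seen in nvmf/common.sh:
# one JSON fragment per subsystem, accumulated in an array.
config=()
for subsystem in 0 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments, mirroring the IFS=, / printf '%s\n' step
# in the log (IFS is saved and restored here for hygiene).
old_ifs="$IFS"
IFS=,
joined="${config[*]}"
IFS="$old_ifs"
printf '%s\n' "$joined"
```

The comma-joined string is what the real script pipes through `jq .` and hands to fio as a JSON config on a file descriptor (`/dev/fd/62` in the trace), which is why each run attaches three controllers (Nvme0..Nvme2) over TCP.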
00:34:30.922 fio-3.35 00:34:30.922 Starting 24 threads 00:34:30.922 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.139 00:34:43.139 filename0: (groupid=0, jobs=1): err= 0: pid=1133346: Fri Jul 26 09:07:59 2024 00:34:43.139 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10015msec) 00:34:43.139 slat (usec): min=9, max=121, avg=55.61, stdev=24.95 00:34:43.139 clat (usec): min=16396, max=65385, avg=32557.74, stdev=1555.76 00:34:43.139 lat (usec): min=16433, max=65411, avg=32613.35, stdev=1553.23 00:34:43.139 clat percentiles (usec): 00:34:43.139 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:34:43.139 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:43.139 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:34:43.139 | 99.00th=[35390], 99.50th=[36963], 99.90th=[49546], 99.95th=[51119], 00:34:43.139 | 99.99th=[65274] 00:34:43.139 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1932.80, stdev=57.48, samples=20 00:34:43.139 iops : min= 448, max= 512, avg=483.20, stdev=14.37, samples=20 00:34:43.139 lat (msec) : 20=0.17%, 50=99.75%, 100=0.08% 00:34:43.139 cpu : usr=97.98%, sys=1.44%, ctx=74, majf=0, minf=38 00:34:43.139 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:43.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.139 filename0: (groupid=0, jobs=1): err= 0: pid=1133347: Fri Jul 26 09:07:59 2024 00:34:43.139 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:34:43.139 slat (usec): min=7, max=115, avg=52.01, stdev=28.72 00:34:43.139 clat (usec): min=16018, max=44971, avg=32582.54, stdev=1328.31 00:34:43.139 lat (usec): min=16077, max=44986, avg=32634.55, stdev=1322.33 00:34:43.139 
clat percentiles (usec): 00:34:43.139 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:34:43.139 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:43.139 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:34:43.139 | 99.00th=[35390], 99.50th=[35914], 99.90th=[44827], 99.95th=[44827], 00:34:43.139 | 99.99th=[44827] 00:34:43.139 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1932.80, stdev=57.24, samples=20 00:34:43.139 iops : min= 448, max= 512, avg=483.20, stdev=14.31, samples=20 00:34:43.139 lat (msec) : 20=0.33%, 50=99.67% 00:34:43.139 cpu : usr=97.94%, sys=1.65%, ctx=19, majf=0, minf=26 00:34:43.139 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.139 filename0: (groupid=0, jobs=1): err= 0: pid=1133348: Fri Jul 26 09:07:59 2024 00:34:43.139 read: IOPS=482, BW=1929KiB/s (1976kB/s)(18.9MiB/10006msec) 00:34:43.139 slat (usec): min=8, max=132, avg=58.73, stdev=23.84 00:34:43.139 clat (usec): min=10748, max=74425, avg=32692.67, stdev=3005.56 00:34:43.139 lat (usec): min=10758, max=74466, avg=32751.40, stdev=3005.05 00:34:43.139 clat percentiles (usec): 00:34:43.139 | 1.00th=[28181], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:34:43.139 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:34:43.139 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:43.139 | 99.00th=[41681], 99.50th=[46400], 99.90th=[73925], 99.95th=[73925], 00:34:43.139 | 99.99th=[73925] 00:34:43.139 bw ( KiB/s): min= 1667, max= 2048, per=4.14%, avg=1925.21, stdev=76.36, samples=19 00:34:43.139 iops : min= 416, max= 512, avg=481.26, 
stdev=19.23, samples=19 00:34:43.139 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:34:43.139 cpu : usr=98.19%, sys=1.39%, ctx=22, majf=0, minf=57 00:34:43.139 IO depths : 1=3.0%, 2=9.0%, 4=23.9%, 8=54.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:34:43.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 issued rwts: total=4826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.139 filename0: (groupid=0, jobs=1): err= 0: pid=1133349: Fri Jul 26 09:07:59 2024 00:34:43.139 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10006msec) 00:34:43.139 slat (usec): min=8, max=675, avg=27.11, stdev=26.43 00:34:43.139 clat (usec): min=12230, max=92440, avg=31851.04, stdev=5697.10 00:34:43.139 lat (usec): min=12240, max=92487, avg=31878.15, stdev=5698.76 00:34:43.139 clat percentiles (usec): 00:34:43.139 | 1.00th=[17171], 5.00th=[23200], 10.00th=[25035], 20.00th=[27657], 00:34:43.139 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:34:43.139 | 70.00th=[32900], 80.00th=[33424], 90.00th=[36439], 95.00th=[40633], 00:34:43.139 | 99.00th=[45351], 99.50th=[52167], 99.90th=[73925], 99.95th=[73925], 00:34:43.139 | 99.99th=[92799] 00:34:43.139 bw ( KiB/s): min= 1632, max= 2208, per=4.27%, avg=1989.89, stdev=134.31, samples=19 00:34:43.139 iops : min= 408, max= 552, avg=497.47, stdev=33.58, samples=19 00:34:43.139 lat (msec) : 20=4.16%, 50=95.21%, 100=0.64% 00:34:43.139 cpu : usr=95.40%, sys=2.59%, ctx=105, majf=0, minf=42 00:34:43.139 IO depths : 1=0.1%, 2=0.6%, 4=4.7%, 8=78.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:34:43.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 complete : 0=0.0%, 4=89.4%, 8=8.2%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.139 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:34:43.139 filename0: (groupid=0, jobs=1): err= 0: pid=1133350: Fri Jul 26 09:07:59 2024 00:34:43.139 read: IOPS=483, BW=1932KiB/s (1979kB/s)(18.9MiB/10002msec) 00:34:43.139 slat (usec): min=9, max=113, avg=43.95, stdev=19.91 00:34:43.139 clat (usec): min=23287, max=67011, avg=32721.11, stdev=2109.69 00:34:43.139 lat (usec): min=23300, max=67048, avg=32765.06, stdev=2108.29 00:34:43.139 clat percentiles (usec): 00:34:43.139 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:43.139 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:43.139 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:34:43.139 | 99.00th=[35390], 99.50th=[35914], 99.90th=[66847], 99.95th=[66847], 00:34:43.139 | 99.99th=[66847] 00:34:43.139 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1926.74, stdev=79.52, samples=19 00:34:43.139 iops : min= 416, max= 512, avg=481.68, stdev=19.88, samples=19 00:34:43.139 lat (msec) : 50=99.67%, 100=0.33% 00:34:43.139 cpu : usr=98.17%, sys=1.42%, ctx=15, majf=0, minf=31 00:34:43.139 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.139 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.139 filename0: (groupid=0, jobs=1): err= 0: pid=1133351: Fri Jul 26 09:07:59 2024 00:34:43.139 read: IOPS=484, BW=1937KiB/s (1984kB/s)(18.9MiB/10011msec) 00:34:43.139 slat (nsec): min=7283, max=86593, avg=38123.10, stdev=12889.71 00:34:43.139 clat (usec): min=12963, max=65481, avg=32690.89, stdev=2021.83 00:34:43.139 lat (usec): min=12975, max=65510, avg=32729.01, stdev=2020.85 00:34:43.139 clat percentiles (usec): 00:34:43.139 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 
20.00th=[32375], 00:34:43.139 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:43.139 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.139 | 99.00th=[35914], 99.50th=[35914], 99.90th=[57934], 99.95th=[57934], 00:34:43.139 | 99.99th=[65274] 00:34:43.139 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1932.10, stdev=80.31, samples=20 00:34:43.139 iops : min= 416, max= 512, avg=482.95, stdev=20.16, samples=20 00:34:43.139 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:34:43.139 cpu : usr=98.23%, sys=1.38%, ctx=16, majf=0, minf=38 00:34:43.139 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.139 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.140 filename0: (groupid=0, jobs=1): err= 0: pid=1133352: Fri Jul 26 09:07:59 2024 00:34:43.140 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10012msec) 00:34:43.140 slat (usec): min=10, max=109, avg=44.17, stdev=17.27 00:34:43.140 clat (usec): min=24225, max=71927, avg=32759.99, stdev=2367.19 00:34:43.140 lat (usec): min=24257, max=71964, avg=32804.16, stdev=2366.42 00:34:43.140 clat percentiles (usec): 00:34:43.140 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:34:43.140 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:43.140 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33424], 00:34:43.140 | 99.00th=[35390], 99.50th=[35914], 99.90th=[71828], 99.95th=[71828], 00:34:43.140 | 99.99th=[71828] 00:34:43.140 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1926.40, stdev=77.42, samples=20 00:34:43.140 iops : min= 416, max= 512, avg=481.60, stdev=19.35, samples=20 00:34:43.140 lat (msec) : 50=99.67%, 100=0.33% 
00:34:43.140 cpu : usr=94.22%, sys=3.19%, ctx=162, majf=0, minf=37 00:34:43.140 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.140 filename0: (groupid=0, jobs=1): err= 0: pid=1133353: Fri Jul 26 09:07:59 2024 00:34:43.140 read: IOPS=486, BW=1944KiB/s (1991kB/s)(19.0MiB/10008msec) 00:34:43.140 slat (usec): min=5, max=142, avg=26.83, stdev=23.33 00:34:43.140 clat (usec): min=12004, max=40018, avg=32693.31, stdev=1641.11 00:34:43.140 lat (usec): min=12010, max=40042, avg=32720.13, stdev=1639.19 00:34:43.140 clat percentiles (usec): 00:34:43.140 | 1.00th=[29492], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:43.140 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:34:43.140 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.140 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:34:43.140 | 99.99th=[40109] 00:34:43.140 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1939.20, stdev=46.89, samples=20 00:34:43.140 iops : min= 480, max= 512, avg=484.80, stdev=11.72, samples=20 00:34:43.140 lat (msec) : 20=0.66%, 50=99.34% 00:34:43.140 cpu : usr=97.76%, sys=1.67%, ctx=69, majf=0, minf=22 00:34:43.140 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.140 filename1: (groupid=0, jobs=1): err= 0: 
pid=1133354: Fri Jul 26 09:07:59 2024 00:34:43.140 read: IOPS=484, BW=1937KiB/s (1983kB/s)(18.9MiB/10009msec) 00:34:43.140 slat (usec): min=8, max=111, avg=37.58, stdev=15.94 00:34:43.140 clat (usec): min=11362, max=62311, avg=32721.53, stdev=3540.93 00:34:43.140 lat (usec): min=11370, max=62356, avg=32759.10, stdev=3542.04 00:34:43.140 clat percentiles (usec): 00:34:43.140 | 1.00th=[17957], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:43.140 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:34:43.140 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.140 | 99.00th=[47449], 99.50th=[51643], 99.90th=[62129], 99.95th=[62129], 00:34:43.140 | 99.99th=[62129] 00:34:43.140 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1932.15, stdev=78.17, samples=20 00:34:43.140 iops : min= 416, max= 512, avg=483.00, stdev=19.68, samples=20 00:34:43.140 lat (msec) : 20=1.94%, 50=97.52%, 100=0.54% 00:34:43.140 cpu : usr=98.23%, sys=1.32%, ctx=17, majf=0, minf=26 00:34:43.140 IO depths : 1=3.8%, 2=9.9%, 4=24.5%, 8=53.1%, 16=8.7%, 32=0.0%, >=64=0.0% 00:34:43.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 issued rwts: total=4846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.140 filename1: (groupid=0, jobs=1): err= 0: pid=1133355: Fri Jul 26 09:07:59 2024 00:34:43.140 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10014msec) 00:34:43.140 slat (nsec): min=10127, max=83345, avg=39143.93, stdev=12334.76 00:34:43.140 clat (usec): min=23907, max=71623, avg=32812.09, stdev=2359.48 00:34:43.140 lat (usec): min=23918, max=71654, avg=32851.23, stdev=2358.54 00:34:43.140 clat percentiles (usec): 00:34:43.140 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:43.140 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 
60.00th=[32637], 00:34:43.140 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.140 | 99.00th=[35914], 99.50th=[35914], 99.90th=[71828], 99.95th=[71828], 00:34:43.140 | 99.99th=[71828] 00:34:43.140 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1926.40, stdev=77.42, samples=20 00:34:43.140 iops : min= 416, max= 512, avg=481.60, stdev=19.35, samples=20 00:34:43.140 lat (msec) : 50=99.67%, 100=0.33% 00:34:43.140 cpu : usr=97.56%, sys=1.64%, ctx=328, majf=0, minf=36 00:34:43.140 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:43.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.140 filename1: (groupid=0, jobs=1): err= 0: pid=1133356: Fri Jul 26 09:07:59 2024 00:34:43.140 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10012msec) 00:34:43.140 slat (usec): min=10, max=114, avg=45.52, stdev=17.59 00:34:43.140 clat (usec): min=23992, max=71971, avg=32754.55, stdev=2378.83 00:34:43.140 lat (usec): min=24029, max=72000, avg=32800.06, stdev=2376.98 00:34:43.140 clat percentiles (usec): 00:34:43.140 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:34:43.140 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:43.140 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:43.140 | 99.00th=[35390], 99.50th=[35914], 99.90th=[71828], 99.95th=[71828], 00:34:43.140 | 99.99th=[71828] 00:34:43.140 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1926.40, stdev=77.42, samples=20 00:34:43.140 iops : min= 416, max= 512, avg=481.60, stdev=19.35, samples=20 00:34:43.140 lat (msec) : 50=99.67%, 100=0.33% 00:34:43.140 cpu : usr=98.15%, sys=1.45%, ctx=15, majf=0, minf=34 00:34:43.140 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.140 filename1: (groupid=0, jobs=1): err= 0: pid=1133357: Fri Jul 26 09:07:59 2024 00:34:43.140 read: IOPS=483, BW=1932KiB/s (1979kB/s)(18.9MiB/10003msec) 00:34:43.140 slat (usec): min=8, max=122, avg=35.75, stdev=30.09 00:34:43.140 clat (usec): min=20437, max=71251, avg=32798.57, stdev=1835.35 00:34:43.140 lat (usec): min=20460, max=71281, avg=32834.32, stdev=1831.19 00:34:43.140 clat percentiles (usec): 00:34:43.140 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[32375], 00:34:43.140 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:34:43.140 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.140 | 99.00th=[35914], 99.50th=[35914], 99.90th=[58459], 99.95th=[58459], 00:34:43.140 | 99.99th=[70779] 00:34:43.140 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1926.74, stdev=79.52, samples=19 00:34:43.140 iops : min= 416, max= 512, avg=481.68, stdev=19.88, samples=19 00:34:43.140 lat (msec) : 50=99.67%, 100=0.33% 00:34:43.140 cpu : usr=95.38%, sys=2.57%, ctx=493, majf=0, minf=23 00:34:43.140 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:43.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.140 filename1: (groupid=0, jobs=1): err= 0: pid=1133358: Fri Jul 26 09:07:59 2024 00:34:43.140 read: IOPS=484, BW=1937KiB/s 
(1984kB/s)(18.9MiB/10011msec) 00:34:43.140 slat (nsec): min=9557, max=93854, avg=38549.98, stdev=12263.17 00:34:43.140 clat (usec): min=13144, max=57595, avg=32684.03, stdev=1954.74 00:34:43.140 lat (usec): min=13168, max=57636, avg=32722.58, stdev=1954.58 00:34:43.140 clat percentiles (usec): 00:34:43.140 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:43.140 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:43.140 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:34:43.140 | 99.00th=[35914], 99.50th=[35914], 99.90th=[57410], 99.95th=[57410], 00:34:43.140 | 99.99th=[57410] 00:34:43.140 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1932.10, stdev=80.31, samples=20 00:34:43.140 iops : min= 416, max= 512, avg=482.95, stdev=20.16, samples=20 00:34:43.140 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:34:43.140 cpu : usr=95.84%, sys=2.51%, ctx=293, majf=0, minf=31 00:34:43.140 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.140 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.140 filename1: (groupid=0, jobs=1): err= 0: pid=1133359: Fri Jul 26 09:07:59 2024 00:34:43.140 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10012msec) 00:34:43.140 slat (usec): min=5, max=107, avg=20.64, stdev=20.89 00:34:43.140 clat (usec): min=1924, max=37205, avg=32529.73, stdev=2939.33 00:34:43.140 lat (usec): min=1943, max=37222, avg=32550.38, stdev=2939.05 00:34:43.140 clat percentiles (usec): 00:34:43.140 | 1.00th=[16450], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:34:43.140 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:34:43.141 | 70.00th=[32900], 80.00th=[33162], 
90.00th=[33424], 95.00th=[33817], 00:34:43.141 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[36439], 00:34:43.141 | 99.99th=[36963] 00:34:43.141 bw ( KiB/s): min= 1920, max= 2176, per=4.19%, avg=1952.00, stdev=70.42, samples=20 00:34:43.141 iops : min= 480, max= 544, avg=488.00, stdev=17.60, samples=20 00:34:43.141 lat (msec) : 2=0.18%, 4=0.14%, 10=0.65%, 20=0.33%, 50=98.69% 00:34:43.141 cpu : usr=97.88%, sys=1.70%, ctx=26, majf=0, minf=40 00:34:43.141 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:43.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.141 filename1: (groupid=0, jobs=1): err= 0: pid=1133360: Fri Jul 26 09:07:59 2024 00:34:43.141 read: IOPS=486, BW=1944KiB/s (1991kB/s)(19.0MiB/10007msec) 00:34:43.141 slat (usec): min=5, max=101, avg=37.03, stdev=17.35 00:34:43.141 clat (usec): min=11498, max=42602, avg=32625.19, stdev=1679.89 00:34:43.141 lat (usec): min=11510, max=42667, avg=32662.22, stdev=1679.93 00:34:43.141 clat percentiles (usec): 00:34:43.141 | 1.00th=[28967], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:43.141 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:34:43.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.141 | 99.00th=[35390], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:34:43.141 | 99.99th=[42730] 00:34:43.141 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1939.35, stdev=46.83, samples=20 00:34:43.141 iops : min= 480, max= 512, avg=484.80, stdev=11.72, samples=20 00:34:43.141 lat (msec) : 20=0.66%, 50=99.34% 00:34:43.141 cpu : usr=94.80%, sys=3.13%, ctx=125, majf=0, minf=34 00:34:43.141 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 
32=0.0%, >=64=0.0% 00:34:43.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.141 filename1: (groupid=0, jobs=1): err= 0: pid=1133361: Fri Jul 26 09:07:59 2024 00:34:43.141 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:34:43.141 slat (usec): min=7, max=110, avg=35.94, stdev=19.96 00:34:43.141 clat (usec): min=15587, max=48568, avg=32741.75, stdev=1636.09 00:34:43.141 lat (usec): min=15614, max=48611, avg=32777.70, stdev=1633.79 00:34:43.141 clat percentiles (usec): 00:34:43.141 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:43.141 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:34:43.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.141 | 99.00th=[35914], 99.50th=[44827], 99.90th=[47449], 99.95th=[48497], 00:34:43.141 | 99.99th=[48497] 00:34:43.141 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1932.80, stdev=53.85, samples=20 00:34:43.141 iops : min= 448, max= 512, avg=483.20, stdev=13.46, samples=20 00:34:43.141 lat (msec) : 20=0.54%, 50=99.46% 00:34:43.141 cpu : usr=97.49%, sys=1.89%, ctx=36, majf=0, minf=33 00:34:43.141 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:34:43.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.141 filename2: (groupid=0, jobs=1): err= 0: pid=1133362: Fri Jul 26 09:07:59 2024 00:34:43.141 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10015msec) 00:34:43.141 slat (nsec): min=8210, 
max=74582, avg=32109.22, stdev=11204.96 00:34:43.141 clat (usec): min=23302, max=44495, avg=32767.35, stdev=1027.06 00:34:43.141 lat (usec): min=23320, max=44511, avg=32799.46, stdev=1024.62 00:34:43.141 clat percentiles (usec): 00:34:43.141 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:43.141 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:34:43.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33162], 95.00th=[33424], 00:34:43.141 | 99.00th=[35914], 99.50th=[35914], 99.90th=[44303], 99.95th=[44303], 00:34:43.141 | 99.99th=[44303] 00:34:43.141 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1932.80, stdev=57.24, samples=20 00:34:43.141 iops : min= 448, max= 512, avg=483.20, stdev=14.31, samples=20 00:34:43.141 lat (msec) : 50=100.00% 00:34:43.141 cpu : usr=97.54%, sys=2.06%, ctx=23, majf=0, minf=32 00:34:43.141 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.141 filename2: (groupid=0, jobs=1): err= 0: pid=1133363: Fri Jul 26 09:07:59 2024 00:34:43.141 read: IOPS=482, BW=1932KiB/s (1978kB/s)(18.9MiB/10005msec) 00:34:43.141 slat (usec): min=12, max=131, avg=50.34, stdev=20.61 00:34:43.141 clat (usec): min=24044, max=62484, avg=32664.05, stdev=1868.68 00:34:43.141 lat (usec): min=24084, max=62525, avg=32714.40, stdev=1868.00 00:34:43.141 clat percentiles (usec): 00:34:43.141 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:34:43.141 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:43.141 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:34:43.141 | 99.00th=[35390], 99.50th=[35914], 99.90th=[62129], 
99.95th=[62653], 00:34:43.141 | 99.99th=[62653] 00:34:43.141 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1926.74, stdev=79.52, samples=19 00:34:43.141 iops : min= 416, max= 512, avg=481.68, stdev=19.88, samples=19 00:34:43.141 lat (msec) : 50=99.67%, 100=0.33% 00:34:43.141 cpu : usr=94.50%, sys=3.08%, ctx=233, majf=0, minf=35 00:34:43.141 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.141 filename2: (groupid=0, jobs=1): err= 0: pid=1133364: Fri Jul 26 09:07:59 2024 00:34:43.141 read: IOPS=486, BW=1945KiB/s (1992kB/s)(19.0MiB/10006msec) 00:34:43.141 slat (nsec): min=8005, max=95922, avg=26762.76, stdev=19052.14 00:34:43.141 clat (usec): min=12081, max=75342, avg=32669.46, stdev=3426.61 00:34:43.141 lat (usec): min=12091, max=75381, avg=32696.22, stdev=3427.52 00:34:43.141 clat percentiles (usec): 00:34:43.141 | 1.00th=[20055], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:43.141 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:34:43.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.141 | 99.00th=[42730], 99.50th=[45351], 99.90th=[74974], 99.95th=[74974], 00:34:43.141 | 99.99th=[74974] 00:34:43.141 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1941.05, stdev=87.97, samples=19 00:34:43.141 iops : min= 416, max= 512, avg=485.26, stdev=21.99, samples=19 00:34:43.141 lat (msec) : 20=0.99%, 50=98.68%, 100=0.33% 00:34:43.141 cpu : usr=97.67%, sys=1.76%, ctx=77, majf=0, minf=35 00:34:43.141 IO depths : 1=4.0%, 2=9.2%, 4=21.1%, 8=56.4%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:43.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:43.141 complete : 0=0.0%, 4=93.3%, 8=1.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 issued rwts: total=4866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.141 filename2: (groupid=0, jobs=1): err= 0: pid=1133365: Fri Jul 26 09:07:59 2024 00:34:43.141 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10006msec) 00:34:43.141 slat (usec): min=7, max=124, avg=47.44, stdev=32.20 00:34:43.141 clat (usec): min=11524, max=74450, avg=32401.70, stdev=4680.05 00:34:43.141 lat (usec): min=11547, max=74498, avg=32449.14, stdev=4681.38 00:34:43.141 clat percentiles (usec): 00:34:43.141 | 1.00th=[17695], 5.00th=[25035], 10.00th=[31065], 20.00th=[31851], 00:34:43.141 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:43.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[36963], 00:34:43.141 | 99.00th=[47973], 99.50th=[55837], 99.90th=[73925], 99.95th=[73925], 00:34:43.141 | 99.99th=[74974] 00:34:43.141 bw ( KiB/s): min= 1667, max= 2144, per=4.19%, avg=1948.37, stdev=95.22, samples=19 00:34:43.141 iops : min= 416, max= 536, avg=487.16, stdev=24.03, samples=19 00:34:43.141 lat (msec) : 20=2.44%, 50=97.03%, 100=0.53% 00:34:43.141 cpu : usr=97.49%, sys=1.68%, ctx=62, majf=0, minf=34 00:34:43.141 IO depths : 1=4.0%, 2=8.3%, 4=17.8%, 8=60.2%, 16=9.7%, 32=0.0%, >=64=0.0% 00:34:43.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 complete : 0=0.0%, 4=92.4%, 8=3.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.141 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.141 filename2: (groupid=0, jobs=1): err= 0: pid=1133366: Fri Jul 26 09:07:59 2024 00:34:43.141 read: IOPS=485, BW=1944KiB/s (1991kB/s)(19.0MiB/10009msec) 00:34:43.141 slat (usec): min=8, max=129, avg=36.21, stdev=14.69 00:34:43.141 clat (usec): min=11467, max=36204, avg=32620.21, 
stdev=1644.42 00:34:43.141 lat (usec): min=11479, max=36233, avg=32656.42, stdev=1642.78 00:34:43.141 clat percentiles (usec): 00:34:43.141 | 1.00th=[29754], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:43.141 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:34:43.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.141 | 99.00th=[35914], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:34:43.141 | 99.99th=[36439] 00:34:43.141 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1939.35, stdev=46.83, samples=20 00:34:43.141 iops : min= 480, max= 512, avg=484.80, stdev=11.72, samples=20 00:34:43.141 lat (msec) : 20=0.66%, 50=99.34% 00:34:43.141 cpu : usr=88.82%, sys=5.52%, ctx=253, majf=0, minf=36 00:34:43.141 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.142 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.142 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.142 filename2: (groupid=0, jobs=1): err= 0: pid=1133367: Fri Jul 26 09:07:59 2024 00:34:43.142 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:34:43.142 slat (usec): min=6, max=106, avg=36.89, stdev=15.03 00:34:43.142 clat (usec): min=16235, max=50366, avg=32714.94, stdev=1296.68 00:34:43.142 lat (usec): min=16289, max=50382, avg=32751.83, stdev=1294.57 00:34:43.142 clat percentiles (usec): 00:34:43.142 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:34:43.142 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:34:43.142 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.142 | 99.00th=[35390], 99.50th=[35914], 99.90th=[44827], 99.95th=[44827], 00:34:43.142 | 99.99th=[50594] 00:34:43.142 bw ( KiB/s): min= 1792, max= 
2048, per=4.15%, avg=1932.80, stdev=57.24, samples=20 00:34:43.142 iops : min= 448, max= 512, avg=483.20, stdev=14.31, samples=20 00:34:43.142 lat (msec) : 20=0.29%, 50=99.67%, 100=0.04% 00:34:43.142 cpu : usr=94.49%, sys=3.24%, ctx=153, majf=0, minf=31 00:34:43.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.142 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.142 filename2: (groupid=0, jobs=1): err= 0: pid=1133368: Fri Jul 26 09:07:59 2024 00:34:43.142 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10015msec) 00:34:43.142 slat (nsec): min=8702, max=71253, avg=32022.14, stdev=10784.32 00:34:43.142 clat (usec): min=16507, max=49039, avg=32789.79, stdev=1492.15 00:34:43.142 lat (usec): min=16531, max=49083, avg=32821.81, stdev=1491.64 00:34:43.142 clat percentiles (usec): 00:34:43.142 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:34:43.142 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:34:43.142 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:34:43.142 | 99.00th=[35914], 99.50th=[44303], 99.90th=[47973], 99.95th=[49021], 00:34:43.142 | 99.99th=[49021] 00:34:43.142 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1932.80, stdev=57.24, samples=20 00:34:43.142 iops : min= 448, max= 512, avg=483.20, stdev=14.31, samples=20 00:34:43.142 lat (msec) : 20=0.25%, 50=99.75% 00:34:43.142 cpu : usr=94.80%, sys=2.93%, ctx=415, majf=0, minf=28 00:34:43.142 IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:34:43.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.142 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:43.142 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.142 filename2: (groupid=0, jobs=1): err= 0: pid=1133369: Fri Jul 26 09:07:59 2024 00:34:43.142 read: IOPS=483, BW=1933KiB/s (1980kB/s)(18.9MiB/10010msec) 00:34:43.142 slat (usec): min=6, max=111, avg=40.54, stdev=16.11 00:34:43.142 clat (usec): min=12906, max=63750, avg=32730.67, stdev=2664.40 00:34:43.142 lat (usec): min=12922, max=63768, avg=32771.21, stdev=2663.63 00:34:43.142 clat percentiles (usec): 00:34:43.142 | 1.00th=[25297], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:34:43.142 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:34:43.142 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:34:43.142 | 99.00th=[41157], 99.50th=[53216], 99.90th=[63701], 99.95th=[63701], 00:34:43.142 | 99.99th=[63701] 00:34:43.142 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1928.80, stdev=79.66, samples=20 00:34:43.142 iops : min= 416, max= 512, avg=482.20, stdev=19.91, samples=20 00:34:43.142 lat (msec) : 20=0.41%, 50=99.05%, 100=0.54% 00:34:43.142 cpu : usr=98.01%, sys=1.58%, ctx=20, majf=0, minf=27 00:34:43.142 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:43.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.142 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.142 issued rwts: total=4838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.142 00:34:43.142 Run status group 0 (all jobs): 00:34:43.142 READ: bw=45.4MiB/s (47.7MB/s), 1929KiB/s-2001KiB/s (1976kB/s-2049kB/s), io=455MiB (477MB), run=10002-10015msec 00:34:43.142 09:07:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:43.142 09:07:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 
00:34:43.142 09:07:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:43.142 09:07:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:43.142 09:07:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:43.142 09:07:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:43.142 09:07:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.142 09:07:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:43.142 09:08:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.142 bdev_null0 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:43.142 09:08:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.142 [2024-07-26 09:08:00.071001] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:43.142 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.143 bdev_null1 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.143 
09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:43.143 { 00:34:43.143 "params": { 00:34:43.143 "name": "Nvme$subsystem", 00:34:43.143 "trtype": "$TEST_TRANSPORT", 00:34:43.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.143 "adrfam": "ipv4", 00:34:43.143 "trsvcid": "$NVMF_PORT", 00:34:43.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.143 "hdgst": ${hdgst:-false}, 00:34:43.143 "ddgst": ${ddgst:-false} 00:34:43.143 }, 00:34:43.143 "method": "bdev_nvme_attach_controller" 00:34:43.143 } 00:34:43.143 EOF 00:34:43.143 )") 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:43.143 { 00:34:43.143 "params": { 00:34:43.143 "name": "Nvme$subsystem", 00:34:43.143 "trtype": "$TEST_TRANSPORT", 00:34:43.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.143 "adrfam": "ipv4", 00:34:43.143 "trsvcid": "$NVMF_PORT", 00:34:43.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.143 "hdgst": ${hdgst:-false}, 00:34:43.143 "ddgst": ${ddgst:-false} 00:34:43.143 }, 00:34:43.143 "method": "bdev_nvme_attach_controller" 00:34:43.143 } 00:34:43.143 EOF 00:34:43.143 )") 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:43.143 "params": { 00:34:43.143 "name": "Nvme0", 00:34:43.143 "trtype": "tcp", 00:34:43.143 "traddr": "10.0.0.2", 00:34:43.143 "adrfam": "ipv4", 00:34:43.143 "trsvcid": "4420", 00:34:43.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:43.143 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:43.143 "hdgst": false, 00:34:43.143 "ddgst": false 00:34:43.143 }, 00:34:43.143 "method": "bdev_nvme_attach_controller" 00:34:43.143 },{ 00:34:43.143 "params": { 00:34:43.143 "name": "Nvme1", 00:34:43.143 "trtype": "tcp", 00:34:43.143 "traddr": "10.0.0.2", 00:34:43.143 "adrfam": "ipv4", 00:34:43.143 "trsvcid": "4420", 00:34:43.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.143 "hdgst": false, 00:34:43.143 "ddgst": false 00:34:43.143 }, 00:34:43.143 "method": "bdev_nvme_attach_controller" 00:34:43.143 }' 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:43.143 09:08:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:43.143 09:08:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.143 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:43.143 ... 00:34:43.143 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:43.143 ... 00:34:43.143 fio-3.35 00:34:43.143 Starting 4 threads 00:34:43.143 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.463 00:34:48.463 filename0: (groupid=0, jobs=1): err= 0: pid=1134631: Fri Jul 26 09:08:06 2024 00:34:48.463 read: IOPS=1836, BW=14.3MiB/s (15.0MB/s)(71.8MiB/5003msec) 00:34:48.463 slat (nsec): min=5382, max=70883, avg=16085.54, stdev=8793.03 00:34:48.463 clat (usec): min=1032, max=8114, avg=4303.81, stdev=549.94 00:34:48.463 lat (usec): min=1047, max=8125, avg=4319.90, stdev=550.56 00:34:48.463 clat percentiles (usec): 00:34:48.463 | 1.00th=[ 2999], 5.00th=[ 3490], 10.00th=[ 3687], 20.00th=[ 3982], 00:34:48.463 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:34:48.463 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 5211], 00:34:48.463 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[ 7898], 00:34:48.463 | 99.99th=[ 8094] 00:34:48.463 bw ( KiB/s): min=14208, max=15584, per=25.31%, avg=14689.30, stdev=505.22, samples=10 00:34:48.463 iops : min= 1776, max= 1948, avg=1836.10, stdev=63.13, samples=10 00:34:48.463 lat (msec) : 2=0.08%, 4=20.93%, 10=78.99% 00:34:48.463 cpu : usr=94.54%, sys=4.82%, ctx=61, majf=0, minf=78 00:34:48.463 IO depths : 1=0.1%, 2=10.1%, 4=62.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:48.463 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.463 issued rwts: total=9187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.463 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.463 filename0: (groupid=0, jobs=1): err= 0: pid=1134632: Fri Jul 26 09:08:06 2024 00:34:48.463 read: IOPS=1794, BW=14.0MiB/s (14.7MB/s)(70.1MiB/5001msec) 00:34:48.463 slat (nsec): min=5425, max=64066, avg=17364.03, stdev=9513.75 00:34:48.463 clat (usec): min=861, max=8035, avg=4400.42, stdev=549.63 00:34:48.463 lat (usec): min=884, max=8061, avg=4417.79, stdev=549.30 00:34:48.463 clat percentiles (usec): 00:34:48.464 | 1.00th=[ 2999], 5.00th=[ 3621], 10.00th=[ 3916], 20.00th=[ 4113], 00:34:48.464 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:34:48.464 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5407], 00:34:48.464 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 7701], 99.95th=[ 7767], 00:34:48.464 | 99.99th=[ 8029] 00:34:48.464 bw ( KiB/s): min=13824, max=14928, per=24.73%, avg=14356.40, stdev=346.88, samples=10 00:34:48.464 iops : min= 1728, max= 1866, avg=1794.50, stdev=43.41, samples=10 00:34:48.464 lat (usec) : 1000=0.03% 00:34:48.464 lat (msec) : 2=0.20%, 4=13.00%, 10=86.76% 00:34:48.464 cpu : usr=94.72%, sys=4.80%, ctx=28, majf=0, minf=82 00:34:48.464 IO depths : 1=0.1%, 2=10.0%, 4=61.6%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.464 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.464 issued rwts: total=8976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.464 filename1: (groupid=0, jobs=1): err= 0: pid=1134633: Fri Jul 26 09:08:06 2024 00:34:48.464 read: IOPS=1812, BW=14.2MiB/s (14.8MB/s)(70.8MiB/5001msec) 00:34:48.464 slat (nsec): min=5072, max=64255, avg=16032.49, stdev=9053.79 
00:34:48.464 clat (usec): min=1086, max=8572, avg=4361.02, stdev=617.47 00:34:48.464 lat (usec): min=1105, max=8616, avg=4377.05, stdev=617.35 00:34:48.464 clat percentiles (usec): 00:34:48.464 | 1.00th=[ 2900], 5.00th=[ 3523], 10.00th=[ 3752], 20.00th=[ 4047], 00:34:48.464 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:34:48.464 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5538], 00:34:48.464 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 7701], 99.95th=[ 8356], 00:34:48.464 | 99.99th=[ 8586] 00:34:48.464 bw ( KiB/s): min=13680, max=15232, per=24.73%, avg=14355.56, stdev=432.21, samples=9 00:34:48.464 iops : min= 1710, max= 1904, avg=1794.44, stdev=54.03, samples=9 00:34:48.464 lat (msec) : 2=0.12%, 4=18.20%, 10=81.68% 00:34:48.464 cpu : usr=94.48%, sys=5.08%, ctx=10, majf=0, minf=61 00:34:48.464 IO depths : 1=0.1%, 2=9.7%, 4=63.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.464 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.464 issued rwts: total=9065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.464 filename1: (groupid=0, jobs=1): err= 0: pid=1134634: Fri Jul 26 09:08:06 2024 00:34:48.464 read: IOPS=1813, BW=14.2MiB/s (14.9MB/s)(70.9MiB/5002msec) 00:34:48.464 slat (nsec): min=5140, max=66532, avg=18883.60, stdev=9046.19 00:34:48.464 clat (usec): min=766, max=8067, avg=4348.83, stdev=605.62 00:34:48.464 lat (usec): min=787, max=8088, avg=4367.71, stdev=605.47 00:34:48.464 clat percentiles (usec): 00:34:48.464 | 1.00th=[ 2966], 5.00th=[ 3490], 10.00th=[ 3752], 20.00th=[ 4015], 00:34:48.464 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:34:48.464 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5538], 00:34:48.464 | 99.00th=[ 6652], 99.50th=[ 7046], 99.90th=[ 7570], 99.95th=[ 7898], 
00:34:48.464 | 99.99th=[ 8094] 00:34:48.464 bw ( KiB/s): min=14128, max=15024, per=24.99%, avg=14507.20, stdev=314.90, samples=10 00:34:48.464 iops : min= 1766, max= 1878, avg=1813.40, stdev=39.36, samples=10 00:34:48.464 lat (usec) : 1000=0.07% 00:34:48.464 lat (msec) : 2=0.13%, 4=18.55%, 10=81.25% 00:34:48.464 cpu : usr=93.28%, sys=4.90%, ctx=187, majf=0, minf=117 00:34:48.464 IO depths : 1=0.1%, 2=9.5%, 4=63.2%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.464 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.464 issued rwts: total=9072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.464 00:34:48.464 Run status group 0 (all jobs): 00:34:48.464 READ: bw=56.7MiB/s (59.4MB/s), 14.0MiB/s-14.3MiB/s (14.7MB/s-15.0MB/s), io=284MiB (297MB), run=5001-5003msec 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:48.464 09:08:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.464 00:34:48.464 real 0m24.118s 00:34:48.464 user 4m29.407s 00:34:48.464 sys 0m8.031s 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:48.464 09:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 ************************************ 00:34:48.464 END TEST fio_dif_rand_params 00:34:48.464 ************************************ 00:34:48.464 09:08:06 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:48.464 09:08:06 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:48.464 09:08:06 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:48.464 09:08:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 ************************************ 00:34:48.464 START TEST fio_dif_digest 00:34:48.464 ************************************ 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 bdev_null0 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 [2024-07-26 09:08:06.468275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:48.464 09:08:06 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@532 -- # config=() 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:48.465 { 00:34:48.465 "params": { 00:34:48.465 "name": "Nvme$subsystem", 00:34:48.465 "trtype": "$TEST_TRANSPORT", 00:34:48.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:48.465 "adrfam": "ipv4", 00:34:48.465 "trsvcid": "$NVMF_PORT", 00:34:48.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:48.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:48.465 "hdgst": ${hdgst:-false}, 00:34:48.465 "ddgst": ${ddgst:-false} 00:34:48.465 }, 00:34:48.465 "method": "bdev_nvme_attach_controller" 00:34:48.465 } 00:34:48.465 EOF 00:34:48.465 )") 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:48.465 "params": { 00:34:48.465 "name": "Nvme0", 00:34:48.465 "trtype": "tcp", 00:34:48.465 "traddr": "10.0.0.2", 00:34:48.465 "adrfam": "ipv4", 00:34:48.465 "trsvcid": "4420", 00:34:48.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:48.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:48.465 "hdgst": true, 00:34:48.465 "ddgst": true 00:34:48.465 }, 00:34:48.465 "method": "bdev_nvme_attach_controller" 00:34:48.465 }' 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:48.465 09:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.465 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:48.465 ... 
00:34:48.465 fio-3.35 00:34:48.465 Starting 3 threads 00:34:48.465 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.719 00:35:00.719 filename0: (groupid=0, jobs=1): err= 0: pid=1135492: Fri Jul 26 09:08:17 2024 00:35:00.719 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(265MiB/10047msec) 00:35:00.719 slat (nsec): min=5008, max=55825, avg=17577.68, stdev=5114.76 00:35:00.719 clat (usec): min=8774, max=57492, avg=14173.40, stdev=1680.77 00:35:00.719 lat (usec): min=8795, max=57510, avg=14190.97, stdev=1680.67 00:35:00.719 clat percentiles (usec): 00:35:00.719 | 1.00th=[10159], 5.00th=[12387], 10.00th=[12911], 20.00th=[13304], 00:35:00.719 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14353], 00:35:00.719 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15401], 95.00th=[15926], 00:35:00.719 | 99.00th=[16581], 99.50th=[17171], 99.90th=[24511], 99.95th=[49546], 00:35:00.719 | 99.99th=[57410] 00:35:00.719 bw ( KiB/s): min=26112, max=28416, per=34.34%, avg=27110.40, stdev=620.99, samples=20 00:35:00.719 iops : min= 204, max= 222, avg=211.80, stdev= 4.85, samples=20 00:35:00.719 lat (msec) : 10=0.66%, 20=99.10%, 50=0.19%, 100=0.05% 00:35:00.719 cpu : usr=91.10%, sys=6.96%, ctx=253, majf=0, minf=166 00:35:00.719 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:00.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.719 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:00.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:00.719 filename0: (groupid=0, jobs=1): err= 0: pid=1135493: Fri Jul 26 09:08:17 2024 00:35:00.719 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10044msec) 00:35:00.719 slat (nsec): min=4778, max=48445, avg=19757.61, stdev=4941.91 00:35:00.719 clat (usec): min=8249, max=56564, avg=14776.15, stdev=2767.99 00:35:00.719 lat (usec): min=8264, max=56584, avg=14795.91, 
stdev=2768.02 00:35:00.719 clat percentiles (usec): 00:35:00.719 | 1.00th=[10814], 5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:35:00.719 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:35:00.719 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:35:00.719 | 99.00th=[17433], 99.50th=[21365], 99.90th=[55837], 99.95th=[56361], 00:35:00.719 | 99.99th=[56361] 00:35:00.719 bw ( KiB/s): min=21504, max=28160, per=32.91%, avg=25986.50, stdev=1245.94, samples=20 00:35:00.719 iops : min= 168, max= 220, avg=203.00, stdev= 9.74, samples=20 00:35:00.719 lat (msec) : 10=0.74%, 20=98.72%, 50=0.15%, 100=0.39% 00:35:00.719 cpu : usr=93.52%, sys=5.88%, ctx=16, majf=0, minf=70 00:35:00.719 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:00.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.719 issued rwts: total=2033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:00.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:00.719 filename0: (groupid=0, jobs=1): err= 0: pid=1135494: Fri Jul 26 09:08:17 2024 00:35:00.719 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(256MiB/10047msec) 00:35:00.719 slat (nsec): min=4816, max=87180, avg=17524.21, stdev=4539.05 00:35:00.719 clat (usec): min=8783, max=56447, avg=14704.25, stdev=2720.16 00:35:00.719 lat (usec): min=8801, max=56534, avg=14721.78, stdev=2720.62 00:35:00.719 clat percentiles (usec): 00:35:00.719 | 1.00th=[11338], 5.00th=[12780], 10.00th=[13173], 20.00th=[13698], 00:35:00.719 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:35:00.719 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16450], 00:35:00.719 | 99.00th=[17433], 99.50th=[20317], 99.90th=[56361], 99.95th=[56361], 00:35:00.719 | 99.99th=[56361] 00:35:00.719 bw ( KiB/s): min=23808, max=27648, per=33.09%, avg=26124.80, stdev=811.56, 
samples=20 00:35:00.719 iops : min= 186, max= 216, avg=204.10, stdev= 6.34, samples=20 00:35:00.719 lat (msec) : 10=0.34%, 20=99.12%, 50=0.24%, 100=0.29% 00:35:00.719 cpu : usr=93.99%, sys=5.53%, ctx=30, majf=0, minf=200 00:35:00.719 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:00.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.719 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:00.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:00.719 00:35:00.719 Run status group 0 (all jobs): 00:35:00.719 READ: bw=77.1MiB/s (80.8MB/s), 25.3MiB/s-26.4MiB/s (26.5MB/s-27.7MB/s), io=775MiB (812MB), run=10044-10047msec 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.719 00:35:00.719 real 0m11.080s 00:35:00.719 user 0m29.189s 00:35:00.719 sys 0m2.121s 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:00.719 09:08:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:00.719 ************************************ 00:35:00.719 END TEST fio_dif_digest 00:35:00.719 ************************************ 00:35:00.719 09:08:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:00.719 09:08:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:00.719 rmmod nvme_tcp 00:35:00.719 rmmod nvme_fabrics 00:35:00.719 rmmod nvme_keyring 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1129447 ']' 00:35:00.719 09:08:17 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1129447 00:35:00.720 09:08:17 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1129447 ']' 00:35:00.720 09:08:17 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1129447 00:35:00.720 09:08:17 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:35:00.720 09:08:17 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:00.720 09:08:17 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1129447 00:35:00.720 09:08:17 nvmf_dif -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:00.720 09:08:17 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:00.720 09:08:17 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1129447' 00:35:00.720 killing process with pid 1129447 00:35:00.720 09:08:17 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1129447 00:35:00.720 09:08:17 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1129447 00:35:00.720 09:08:17 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:00.720 09:08:17 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:00.720 Waiting for block devices as requested 00:35:00.720 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:00.720 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:00.720 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:00.978 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:00.978 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:00.978 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:00.978 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:01.237 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:01.237 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:01.237 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:01.237 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:01.496 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:01.496 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:01.496 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:01.496 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:01.755 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:01.755 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:01.755 09:08:20 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:01.755 09:08:20 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:01.755 09:08:20 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:01.755 
09:08:20 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:01.755 09:08:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.755 09:08:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:01.755 09:08:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.287 09:08:22 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:04.287 00:35:04.287 real 1m6.481s 00:35:04.287 user 6m25.533s 00:35:04.287 sys 0m19.812s 00:35:04.287 09:08:22 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:04.287 09:08:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:04.287 ************************************ 00:35:04.287 END TEST nvmf_dif 00:35:04.287 ************************************ 00:35:04.287 09:08:22 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:04.287 09:08:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:04.287 09:08:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:04.287 09:08:22 -- common/autotest_common.sh@10 -- # set +x 00:35:04.287 ************************************ 00:35:04.287 START TEST nvmf_abort_qd_sizes 00:35:04.287 ************************************ 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:04.287 * Looking for test storage... 
00:35:04.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:04.287 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.288 09:08:22 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:04.288 09:08:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:06.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:06.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:35:06.188 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:06.188 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.188 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:06.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:06.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:35:06.189 00:35:06.189 --- 10.0.0.2 ping statistics --- 00:35:06.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.189 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:06.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:35:06.189 00:35:06.189 --- 10.0.0.1 ping statistics --- 00:35:06.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.189 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:06.189 09:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:07.566 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:07.566 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:07.566 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:07.566 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:07.566 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:07.566 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:07.566 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:07.566 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:07.566 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:07.566 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:07.566 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:07.566 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:07.566 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:07.566 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:07.566 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:35:07.566 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:08.501 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1140293 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1140293 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1140293 ']' 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:08.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:08.501 09:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:08.501 [2024-07-26 09:08:26.846006] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:08.501 [2024-07-26 09:08:26.846104] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.501 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.501 [2024-07-26 09:08:26.882290] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:08.501 [2024-07-26 09:08:26.912538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:08.760 [2024-07-26 09:08:27.004590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.760 [2024-07-26 09:08:27.004652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.760 [2024-07-26 09:08:27.004668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.760 [2024-07-26 09:08:27.004682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.760 [2024-07-26 09:08:27.004693] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:08.760 [2024-07-26 09:08:27.004774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.760 [2024-07-26 09:08:27.004842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:08.760 [2024-07-26 09:08:27.004934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:08.760 [2024-07-26 09:08:27.004936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:08.760 09:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:08.760 ************************************ 00:35:08.760 START TEST spdk_target_abort 00:35:08.760 ************************************ 00:35:08.760 09:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:35:08.760 09:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:08.760 09:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:08.760 09:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.760 09:08:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.039 spdk_targetn1 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.039 [2024-07-26 09:08:30.037379] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.039 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.040 [2024-07-26 09:08:30.069624] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.040 09:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.040 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.318 Initializing NVMe Controllers 00:35:15.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:15.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:15.318 Initialization complete. Launching workers. 
00:35:15.318 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10860, failed: 0 00:35:15.318 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1323, failed to submit 9537 00:35:15.318 success 762, unsuccess 561, failed 0 00:35:15.318 09:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:15.318 09:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:15.318 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.597 Initializing NVMe Controllers 00:35:18.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:18.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:18.597 Initialization complete. Launching workers. 
00:35:18.597 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8609, failed: 0 00:35:18.597 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1249, failed to submit 7360 00:35:18.597 success 313, unsuccess 936, failed 0 00:35:18.597 09:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:18.597 09:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:18.597 EAL: No free 2048 kB hugepages reported on node 1 00:35:21.877 Initializing NVMe Controllers 00:35:21.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:21.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:21.877 Initialization complete. Launching workers. 
00:35:21.877 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31180, failed: 0 00:35:21.877 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2720, failed to submit 28460 00:35:21.877 success 513, unsuccess 2207, failed 0 00:35:21.877 09:08:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:21.877 09:08:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.877 09:08:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:21.877 09:08:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.877 09:08:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:21.877 09:08:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.877 09:08:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1140293 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1140293 ']' 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1140293 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1140293 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1140293' 00:35:22.846 killing process with pid 1140293 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1140293 00:35:22.846 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1140293 00:35:23.104 00:35:23.104 real 0m14.159s 00:35:23.104 user 0m53.693s 00:35:23.104 sys 0m2.580s 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:23.104 ************************************ 00:35:23.104 END TEST spdk_target_abort 00:35:23.104 ************************************ 00:35:23.104 09:08:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:23.104 09:08:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:23.104 09:08:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:23.104 09:08:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:23.104 ************************************ 00:35:23.104 START TEST kernel_target_abort 00:35:23.104 ************************************ 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:23.104 09:08:41 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:23.104 09:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:24.039 Waiting for block devices as requested 00:35:24.039 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:24.299 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:24.299 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:24.557 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:24.557 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:24.557 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:24.557 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:24.816 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:24.816 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:24.816 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:24.816 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:25.076 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:25.076 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:25.076 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:25.076 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:25.420 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:25.420 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:25.420 No valid GPT data, bailing 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:25.420 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:25.680 00:35:25.680 Discovery Log Number of Records 2, Generation counter 2 00:35:25.680 =====Discovery Log Entry 0====== 00:35:25.680 trtype: tcp 00:35:25.680 adrfam: ipv4 00:35:25.680 subtype: current discovery subsystem 00:35:25.680 treq: not specified, sq flow control disable supported 00:35:25.680 portid: 1 00:35:25.680 trsvcid: 4420 00:35:25.680 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:25.680 traddr: 10.0.0.1 00:35:25.680 eflags: none 00:35:25.680 sectype: none 00:35:25.680 =====Discovery Log Entry 1====== 00:35:25.680 trtype: tcp 00:35:25.680 adrfam: ipv4 00:35:25.680 subtype: nvme subsystem 00:35:25.680 treq: not specified, sq flow control disable supported 00:35:25.680 portid: 1 00:35:25.680 trsvcid: 4420 00:35:25.680 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:25.680 traddr: 10.0.0.1 00:35:25.680 eflags: none 00:35:25.680 sectype: none 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:25.680 09:08:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:25.680 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.969 Initializing NVMe Controllers 00:35:28.969 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:28.969 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:28.969 Initialization complete. Launching workers. 
00:35:28.969 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34013, failed: 0 00:35:28.969 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34013, failed to submit 0 00:35:28.969 success 0, unsuccess 34013, failed 0 00:35:28.969 09:08:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:28.969 09:08:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:28.969 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.253 Initializing NVMe Controllers 00:35:32.253 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:32.253 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:32.253 Initialization complete. Launching workers. 
00:35:32.253 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65831, failed: 0 00:35:32.253 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16606, failed to submit 49225 00:35:32.253 success 0, unsuccess 16606, failed 0 00:35:32.253 09:08:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:32.253 09:08:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:32.253 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.542 Initializing NVMe Controllers 00:35:35.542 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:35.542 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:35.542 Initialization complete. Launching workers. 
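The abort counters in the runs above are self-consistent: at queue depth 4 every completed I/O had an abort submitted (34013 of 34013), while at depths 24 and 64 most abort submissions fail because the queue is already full, and `abort submitted + failed to submit` always equals the I/O completed count. A quick check of that accounting, using the numbers from the log:

```python
# Abort accounting from the three runs logged above (qd 4, 24, 64).
# Invariant: every completed I/O either had an abort submitted for it,
# or the abort could not be submitted (queue full at higher depths).
runs = [
    # (queue_depth, io_completed, aborts_submitted, failed_to_submit)
    (4, 34013, 34013, 0),
    (24, 65831, 16606, 49225),
    (64, 64281, 16062, 48219),
]

for qd, completed, submitted, failed in runs:
    assert submitted + failed == completed, (qd, completed)

print("abort accounting consistent for all queue depths")
```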
00:35:35.542 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64281, failed: 0 00:35:35.542 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16062, failed to submit 48219 00:35:35.542 success 0, unsuccess 16062, failed 0 00:35:35.542 09:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:35.542 09:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:35.542 09:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:35:35.542 09:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:35.542 09:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:35.542 09:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:35.542 09:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:35.542 09:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:35.542 09:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:35.542 09:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:36.110 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:36.111 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:36.111 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:36.111 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:36.111 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:36.111 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:36.111 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:36.111 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:36.111 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:36.111 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:36.111 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:36.111 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:36.111 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:36.111 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:36.111 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:36.111 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:37.047 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:37.306 00:35:37.306 real 0m14.180s 00:35:37.306 user 0m5.336s 00:35:37.306 sys 0m3.289s 00:35:37.306 09:08:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:37.306 09:08:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.306 ************************************ 00:35:37.306 END TEST kernel_target_abort 00:35:37.306 ************************************ 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:37.306 rmmod nvme_tcp 00:35:37.306 rmmod nvme_fabrics 00:35:37.306 rmmod nvme_keyring 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1140293 ']' 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1140293 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1140293 ']' 00:35:37.306 09:08:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1140293 00:35:37.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1140293) - No such process 00:35:37.307 09:08:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1140293 is not found' 00:35:37.307 Process with pid 1140293 is not found 00:35:37.307 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:37.307 09:08:55 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:38.242 Waiting for block devices as requested 00:35:38.242 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:38.502 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:38.502 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:38.761 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:38.761 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:38.761 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:38.761 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:39.019 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:39.019 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:39.019 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:39.019 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:39.278 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:39.278 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:39.278 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:39.278 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:39.537 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:39.537 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:39.537 09:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:39.537 09:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:39.537 09:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:39.537 09:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:39.537 09:08:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.537 09:08:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:39.537 09:08:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.072 09:09:00 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:42.072 00:35:42.072 real 0m37.755s 00:35:42.072 user 1m1.082s 00:35:42.072 sys 0m9.319s 00:35:42.072 09:09:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:42.072 09:09:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.072 ************************************ 00:35:42.072 END TEST nvmf_abort_qd_sizes 00:35:42.072 ************************************ 00:35:42.072 09:09:00 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:42.072 09:09:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:42.072 09:09:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:42.072 09:09:00 -- common/autotest_common.sh@10 -- # set +x 00:35:42.072 ************************************ 00:35:42.072 START TEST keyring_file 00:35:42.072 ************************************ 00:35:42.072 09:09:00 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:42.072 * Looking for test storage... 00:35:42.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:42.072 09:09:00 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:42.072 09:09:00 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.072 09:09:00 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.072 09:09:00 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.072 09:09:00 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.072 09:09:00 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.072 09:09:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.072 09:09:00 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.072 09:09:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:42.072 09:09:00 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@47 -- # : 0 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.072 09:09:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:42.073 09:09:00 
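The `PATH` echoed above contains the Go/protoc/golangci directories four times over, because each nested `source` of `paths/export.sh` prepends them again without checking for duplicates. An order-preserving de-duplication of such a path can be sketched as follows (illustrative only; `export.sh` itself performs no de-duplication, and the directories here are shortened stand-ins):

```python
# Order-preserving de-duplication of a colon-separated PATH like the
# one echoed above, where repeated sourcing has prepended the same
# toolchain directories several times.
def dedup_path(path: str) -> str:
    seen: set[str] = set()
    kept: list[str] = []
    for d in path.split(":"):
        if d and d not in seen:  # keep first occurrence, drop repeats
            seen.add(d)
            kept.append(d)
    return ":".join(kept)

p = "/opt/go/bin:/opt/protoc/bin:/opt/go/bin:/usr/bin:/opt/protoc/bin"
print(dedup_path(p))
```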
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bvZIAWwsBj 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bvZIAWwsBj 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bvZIAWwsBj 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.bvZIAWwsBj 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TRLJzCwdUJ 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:42.073 09:09:00 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:42.073 09:09:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TRLJzCwdUJ 00:35:42.073 09:09:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TRLJzCwdUJ 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.TRLJzCwdUJ 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=1146066 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:42.073 09:09:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1146066 00:35:42.073 09:09:00 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1146066 ']' 00:35:42.073 09:09:00 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.073 09:09:00 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:42.073 09:09:00 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:42.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:42.073 09:09:00 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:42.073 09:09:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:42.073 [2024-07-26 09:09:00.281969] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
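The `format_interchange_psk`/`format_key` calls above wrap the raw hex key (`00112233445566778899aabbccddeeff`, digest `0`) into a TLS PSK interchange string before it is written to the temp file and `chmod 0600`-ed. A sketch of that formatting follows; the exact layout (prefix, two-digit hash indicator, base64 of the key bytes plus a little-endian CRC32, trailing colon) is an assumption on my part and should be checked against the python snippet embedded in `nvmf/common.sh` before being relied on:

```python
# Sketch of what format_interchange_psk appears to compute above.
# Assumed layout (verify against nvmf/common.sh):
#   <prefix>:<2-digit hash id>:<base64(key_bytes + CRC32(key_bytes), LE)>:
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, hash_id: int,
                           prefix: str = "NVMeTLSkey-1") -> str:
    key = bytes.fromhex(key_hex)
    crc = struct.pack("<I", zlib.crc32(key))  # CRC32 appended little-endian
    b64 = base64.b64encode(key + crc).decode()
    return f"{prefix}:{hash_id:02x}:{b64}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```

The `0` digest maps to the `00` (no hash) indicator, which matches the `digest=0` locals in the trace.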
00:35:42.073 [2024-07-26 09:09:00.282081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146066 ] 00:35:42.073 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.073 [2024-07-26 09:09:00.314881] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:42.073 [2024-07-26 09:09:00.345165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.073 [2024-07-26 09:09:00.439510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.331 09:09:00 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:42.331 09:09:00 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:42.331 09:09:00 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:42.331 09:09:00 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.331 09:09:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:42.331 [2024-07-26 09:09:00.683834] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:42.331 null0 00:35:42.331 [2024-07-26 09:09:00.715909] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:42.332 [2024-07-26 09:09:00.716421] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:42.332 [2024-07-26 09:09:00.723901] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.332 09:09:00 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:42.332 09:09:00 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:42.332 [2024-07-26 09:09:00.735938] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:42.332 request: 00:35:42.332 { 00:35:42.332 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.332 "secure_channel": false, 00:35:42.332 "listen_address": { 00:35:42.332 "trtype": "tcp", 00:35:42.332 "traddr": "127.0.0.1", 00:35:42.332 "trsvcid": "4420" 00:35:42.332 }, 00:35:42.332 "method": "nvmf_subsystem_add_listener", 00:35:42.332 "req_id": 1 00:35:42.332 } 00:35:42.332 Got JSON-RPC error response 00:35:42.332 response: 00:35:42.332 { 00:35:42.332 "code": -32602, 00:35:42.332 "message": "Invalid parameters" 00:35:42.332 } 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@677 -- # (( 
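The `NOT rpc_cmd` wrapper in the trace above is a negative test: the target already listens on `127.0.0.1:4420`, so a second `nvmf_subsystem_add_listener` for the same address must fail, and the expected failure is the JSON-RPC error envelope shown (`-32602`, "Invalid parameters"). That expect-failure pattern can be sketched with a toy in-memory registry standing in for the SPDK target (the dict-based registry is hypothetical, not SPDK code):

```python
# Toy model of the duplicate-listener negative test above: the second
# add of the same (nqn, traddr, trsvcid) listener must return the
# JSON-RPC "Invalid parameters" error instead of succeeding.
import json

listeners: set[tuple[str, str, str]] = set()

def add_listener(nqn: str, traddr: str, trsvcid: str) -> dict:
    key = (nqn, traddr, trsvcid)
    if key in listeners:
        # Mirrors the error envelope logged by rpc_cmd above.
        return {"code": -32602, "message": "Invalid parameters"}
    listeners.add(key)
    return {"result": True}

first = add_listener("nqn.2016-06.io.spdk:cnode0", "127.0.0.1", "4420")
second = add_listener("nqn.2016-06.io.spdk:cnode0", "127.0.0.1", "4420")
print(json.dumps(second))
```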
!es == 0 )) 00:35:42.332 09:09:00 keyring_file -- keyring/file.sh@46 -- # bperfpid=1146097 00:35:42.332 09:09:00 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:42.332 09:09:00 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1146097 /var/tmp/bperf.sock 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1146097 ']' 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:42.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:42.332 09:09:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:42.332 [2024-07-26 09:09:00.782146] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:42.332 [2024-07-26 09:09:00.782233] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146097 ] 00:35:42.590 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.590 [2024-07-26 09:09:00.815997] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:42.590 [2024-07-26 09:09:00.845242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.590 [2024-07-26 09:09:00.942659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:42.849 09:09:01 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:42.849 09:09:01 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:42.849 09:09:01 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bvZIAWwsBj 00:35:42.849 09:09:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bvZIAWwsBj 00:35:43.108 09:09:01 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TRLJzCwdUJ 00:35:43.108 09:09:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TRLJzCwdUJ 00:35:43.380 09:09:01 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:35:43.380 09:09:01 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:35:43.380 09:09:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:43.380 09:09:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:43.380 09:09:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:43.660 09:09:01 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.bvZIAWwsBj == \/\t\m\p\/\t\m\p\.\b\v\Z\I\A\W\w\s\B\j ]] 00:35:43.660 09:09:01 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:35:43.660 09:09:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:43.660 09:09:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:43.660 09:09:01 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:43.660 09:09:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:43.660 09:09:02 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.TRLJzCwdUJ == \/\t\m\p\/\t\m\p\.\T\R\L\J\z\C\w\d\U\J ]] 00:35:43.660 09:09:02 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:35:43.660 09:09:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:43.660 09:09:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:43.660 09:09:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:43.660 09:09:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:43.660 09:09:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:43.918 09:09:02 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:43.918 09:09:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:35:43.918 09:09:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:44.176 09:09:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.176 09:09:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.176 09:09:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:44.176 09:09:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.176 09:09:02 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:44.176 09:09:02 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:44.176 09:09:02 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:44.434 [2024-07-26 09:09:02.871452] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:44.691 nvme0n1 00:35:44.691 09:09:02 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:35:44.691 09:09:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:44.691 09:09:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.691 09:09:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.691 09:09:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.691 09:09:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:44.949 09:09:03 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:35:44.949 09:09:03 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:35:44.949 09:09:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:44.949 09:09:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.949 09:09:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.949 09:09:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.949 09:09:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:45.207 09:09:03 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:35:45.207 09:09:03 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:45.207 Running I/O for 1 seconds... 
00:35:46.581 00:35:46.581 Latency(us) 00:35:46.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.581 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:46.581 nvme0n1 : 1.02 5128.48 20.03 0.00 0.00 24673.58 4393.34 27767.85 00:35:46.581 =================================================================================================================== 00:35:46.581 Total : 5128.48 20.03 0.00 0.00 24673.58 4393.34 27767.85 00:35:46.581 0 00:35:46.581 09:09:04 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:46.581 09:09:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:46.581 09:09:04 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:35:46.581 09:09:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:46.581 09:09:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:46.581 09:09:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.581 09:09:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.581 09:09:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:46.840 09:09:05 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:35:46.840 09:09:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:35:46.840 09:09:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:46.840 09:09:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:46.840 09:09:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.840 09:09:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.840 09:09:05 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:47.098 09:09:05 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:47.098 09:09:05 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.098 09:09:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:47.098 09:09:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.098 09:09:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:47.098 09:09:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:47.098 09:09:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:47.098 09:09:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:47.098 09:09:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.098 09:09:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.356 [2024-07-26 09:09:05.647852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:47.356 [2024-07-26 09:09:05.648358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c97b0 (107): Transport endpoint is 
not connected 00:35:47.356 [2024-07-26 09:09:05.649337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c97b0 (9): Bad file descriptor 00:35:47.356 [2024-07-26 09:09:05.650346] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:47.356 [2024-07-26 09:09:05.650384] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:47.356 [2024-07-26 09:09:05.650401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:47.356 request: 00:35:47.356 { 00:35:47.356 "name": "nvme0", 00:35:47.356 "trtype": "tcp", 00:35:47.356 "traddr": "127.0.0.1", 00:35:47.356 "adrfam": "ipv4", 00:35:47.356 "trsvcid": "4420", 00:35:47.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.356 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:47.356 "prchk_reftag": false, 00:35:47.356 "prchk_guard": false, 00:35:47.356 "hdgst": false, 00:35:47.356 "ddgst": false, 00:35:47.356 "psk": "key1", 00:35:47.356 "method": "bdev_nvme_attach_controller", 00:35:47.356 "req_id": 1 00:35:47.356 } 00:35:47.356 Got JSON-RPC error response 00:35:47.356 response: 00:35:47.356 { 00:35:47.356 "code": -5, 00:35:47.356 "message": "Input/output error" 00:35:47.356 } 00:35:47.356 09:09:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:47.356 09:09:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:47.356 09:09:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:47.356 09:09:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:47.356 09:09:05 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:35:47.356 09:09:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.356 09:09:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.356 09:09:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.356 09:09:05 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.356 09:09:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.614 09:09:05 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:35:47.614 09:09:05 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:35:47.614 09:09:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:47.614 09:09:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.614 09:09:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.614 09:09:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.614 09:09:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:47.872 09:09:06 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:47.872 09:09:06 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:35:47.872 09:09:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:48.130 09:09:06 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:35:48.130 09:09:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:48.387 09:09:06 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:35:48.387 09:09:06 keyring_file -- keyring/file.sh@77 -- # jq length 00:35:48.387 09:09:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.646 09:09:06 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:35:48.646 09:09:06 keyring_file -- keyring/file.sh@80 -- # 
chmod 0660 /tmp/tmp.bvZIAWwsBj 00:35:48.646 09:09:06 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.bvZIAWwsBj 00:35:48.646 09:09:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:48.646 09:09:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.bvZIAWwsBj 00:35:48.646 09:09:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:48.646 09:09:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:48.646 09:09:06 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:48.646 09:09:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:48.646 09:09:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bvZIAWwsBj 00:35:48.646 09:09:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bvZIAWwsBj 00:35:48.904 [2024-07-26 09:09:07.192318] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bvZIAWwsBj': 0100660 00:35:48.904 [2024-07-26 09:09:07.192378] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:48.904 request: 00:35:48.904 { 00:35:48.904 "name": "key0", 00:35:48.904 "path": "/tmp/tmp.bvZIAWwsBj", 00:35:48.904 "method": "keyring_file_add_key", 00:35:48.904 "req_id": 1 00:35:48.904 } 00:35:48.904 Got JSON-RPC error response 00:35:48.904 response: 00:35:48.904 { 00:35:48.904 "code": -1, 00:35:48.904 "message": "Operation not permitted" 00:35:48.904 } 00:35:48.904 09:09:07 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:48.904 09:09:07 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:48.904 09:09:07 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:35:48.904 09:09:07 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:48.904 09:09:07 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.bvZIAWwsBj 00:35:48.904 09:09:07 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bvZIAWwsBj 00:35:48.904 09:09:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bvZIAWwsBj 00:35:49.162 09:09:07 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.bvZIAWwsBj 00:35:49.162 09:09:07 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:35:49.162 09:09:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:49.162 09:09:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.162 09:09:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.162 09:09:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.162 09:09:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:49.421 09:09:07 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:35:49.421 09:09:07 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:49.421 09:09:07 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:49.421 09:09:07 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:49.421 09:09:07 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:49.421 09:09:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:35:49.421 09:09:07 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:49.421 09:09:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.421 09:09:07 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:49.421 09:09:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:49.679 [2024-07-26 09:09:08.010648] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.bvZIAWwsBj': No such file or directory 00:35:49.679 [2024-07-26 09:09:08.010720] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:49.679 [2024-07-26 09:09:08.010761] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:49.679 [2024-07-26 09:09:08.010774] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:49.679 [2024-07-26 09:09:08.010787] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:49.679 request: 00:35:49.679 { 00:35:49.679 "name": "nvme0", 00:35:49.679 "trtype": "tcp", 00:35:49.679 "traddr": "127.0.0.1", 00:35:49.679 "adrfam": "ipv4", 00:35:49.679 "trsvcid": "4420", 00:35:49.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.679 "prchk_reftag": false, 00:35:49.679 "prchk_guard": false, 00:35:49.679 "hdgst": false, 00:35:49.679 "ddgst": false, 00:35:49.679 "psk": "key0", 00:35:49.679 "method": "bdev_nvme_attach_controller", 00:35:49.679 "req_id": 1 00:35:49.679 } 00:35:49.679 Got 
JSON-RPC error response 00:35:49.679 response: 00:35:49.679 { 00:35:49.679 "code": -19, 00:35:49.679 "message": "No such device" 00:35:49.679 } 00:35:49.679 09:09:08 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:49.679 09:09:08 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:49.679 09:09:08 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:49.679 09:09:08 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:49.679 09:09:08 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:35:49.679 09:09:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:49.937 09:09:08 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:49.937 09:09:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:49.937 09:09:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:49.937 09:09:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:49.937 09:09:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:49.937 09:09:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:49.937 09:09:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vuY5BrKsh3 00:35:49.937 09:09:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:49.937 09:09:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:49.937 09:09:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:49.937 09:09:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:49.937 09:09:08 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:49.937 09:09:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:49.937 09:09:08 keyring_file -- 
nvmf/common.sh@705 -- # python - 00:35:49.937 09:09:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vuY5BrKsh3 00:35:49.937 09:09:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vuY5BrKsh3 00:35:49.937 09:09:08 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.vuY5BrKsh3 00:35:49.937 09:09:08 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vuY5BrKsh3 00:35:49.937 09:09:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vuY5BrKsh3 00:35:50.195 09:09:08 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.195 09:09:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.453 nvme0n1 00:35:50.710 09:09:08 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:35:50.710 09:09:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:50.710 09:09:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:50.710 09:09:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:50.710 09:09:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.710 09:09:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:50.968 09:09:09 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:35:50.968 09:09:09 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:35:50.968 09:09:09 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:51.226 09:09:09 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:35:51.226 09:09:09 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:35:51.226 09:09:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:51.226 09:09:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.226 09:09:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:51.226 09:09:09 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:35:51.484 09:09:09 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:35:51.484 09:09:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:51.484 09:09:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:51.484 09:09:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:51.484 09:09:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.484 09:09:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:51.484 09:09:09 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:35:51.484 09:09:09 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:51.484 09:09:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:51.741 09:09:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:35:51.741 09:09:10 keyring_file -- keyring/file.sh@104 -- # jq length 00:35:51.741 09:09:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:51.999 09:09:10 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:35:52.256 09:09:10 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vuY5BrKsh3 00:35:52.256 09:09:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vuY5BrKsh3 00:35:52.256 09:09:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TRLJzCwdUJ 00:35:52.256 09:09:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TRLJzCwdUJ 00:35:52.513 09:09:10 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:52.513 09:09:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:53.078 nvme0n1 00:35:53.078 09:09:11 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:35:53.078 09:09:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:53.337 09:09:11 keyring_file -- keyring/file.sh@112 -- # config='{ 00:35:53.337 "subsystems": [ 00:35:53.337 { 00:35:53.337 "subsystem": "keyring", 00:35:53.337 "config": [ 00:35:53.337 { 00:35:53.337 "method": "keyring_file_add_key", 00:35:53.337 "params": { 00:35:53.337 "name": "key0", 00:35:53.337 "path": "/tmp/tmp.vuY5BrKsh3" 00:35:53.337 } 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "method": "keyring_file_add_key", 00:35:53.337 "params": { 00:35:53.337 "name": "key1", 
00:35:53.337 "path": "/tmp/tmp.TRLJzCwdUJ" 00:35:53.337 } 00:35:53.337 } 00:35:53.337 ] 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "subsystem": "iobuf", 00:35:53.337 "config": [ 00:35:53.337 { 00:35:53.337 "method": "iobuf_set_options", 00:35:53.337 "params": { 00:35:53.337 "small_pool_count": 8192, 00:35:53.337 "large_pool_count": 1024, 00:35:53.337 "small_bufsize": 8192, 00:35:53.337 "large_bufsize": 135168 00:35:53.337 } 00:35:53.337 } 00:35:53.337 ] 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "subsystem": "sock", 00:35:53.337 "config": [ 00:35:53.337 { 00:35:53.337 "method": "sock_set_default_impl", 00:35:53.337 "params": { 00:35:53.337 "impl_name": "posix" 00:35:53.337 } 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "method": "sock_impl_set_options", 00:35:53.337 "params": { 00:35:53.337 "impl_name": "ssl", 00:35:53.337 "recv_buf_size": 4096, 00:35:53.337 "send_buf_size": 4096, 00:35:53.337 "enable_recv_pipe": true, 00:35:53.337 "enable_quickack": false, 00:35:53.337 "enable_placement_id": 0, 00:35:53.337 "enable_zerocopy_send_server": true, 00:35:53.337 "enable_zerocopy_send_client": false, 00:35:53.337 "zerocopy_threshold": 0, 00:35:53.337 "tls_version": 0, 00:35:53.337 "enable_ktls": false 00:35:53.337 } 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "method": "sock_impl_set_options", 00:35:53.337 "params": { 00:35:53.337 "impl_name": "posix", 00:35:53.337 "recv_buf_size": 2097152, 00:35:53.337 "send_buf_size": 2097152, 00:35:53.337 "enable_recv_pipe": true, 00:35:53.337 "enable_quickack": false, 00:35:53.337 "enable_placement_id": 0, 00:35:53.337 "enable_zerocopy_send_server": true, 00:35:53.337 "enable_zerocopy_send_client": false, 00:35:53.337 "zerocopy_threshold": 0, 00:35:53.337 "tls_version": 0, 00:35:53.337 "enable_ktls": false 00:35:53.337 } 00:35:53.337 } 00:35:53.337 ] 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "subsystem": "vmd", 00:35:53.337 "config": [] 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "subsystem": "accel", 00:35:53.337 "config": [ 
00:35:53.337 { 00:35:53.337 "method": "accel_set_options", 00:35:53.337 "params": { 00:35:53.337 "small_cache_size": 128, 00:35:53.337 "large_cache_size": 16, 00:35:53.337 "task_count": 2048, 00:35:53.337 "sequence_count": 2048, 00:35:53.337 "buf_count": 2048 00:35:53.337 } 00:35:53.337 } 00:35:53.337 ] 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "subsystem": "bdev", 00:35:53.337 "config": [ 00:35:53.337 { 00:35:53.337 "method": "bdev_set_options", 00:35:53.337 "params": { 00:35:53.337 "bdev_io_pool_size": 65535, 00:35:53.337 "bdev_io_cache_size": 256, 00:35:53.337 "bdev_auto_examine": true, 00:35:53.337 "iobuf_small_cache_size": 128, 00:35:53.337 "iobuf_large_cache_size": 16 00:35:53.337 } 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "method": "bdev_raid_set_options", 00:35:53.337 "params": { 00:35:53.337 "process_window_size_kb": 1024, 00:35:53.337 "process_max_bandwidth_mb_sec": 0 00:35:53.337 } 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "method": "bdev_iscsi_set_options", 00:35:53.337 "params": { 00:35:53.337 "timeout_sec": 30 00:35:53.337 } 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "method": "bdev_nvme_set_options", 00:35:53.337 "params": { 00:35:53.337 "action_on_timeout": "none", 00:35:53.337 "timeout_us": 0, 00:35:53.337 "timeout_admin_us": 0, 00:35:53.337 "keep_alive_timeout_ms": 10000, 00:35:53.337 "arbitration_burst": 0, 00:35:53.337 "low_priority_weight": 0, 00:35:53.337 "medium_priority_weight": 0, 00:35:53.337 "high_priority_weight": 0, 00:35:53.337 "nvme_adminq_poll_period_us": 10000, 00:35:53.337 "nvme_ioq_poll_period_us": 0, 00:35:53.337 "io_queue_requests": 512, 00:35:53.337 "delay_cmd_submit": true, 00:35:53.337 "transport_retry_count": 4, 00:35:53.337 "bdev_retry_count": 3, 00:35:53.337 "transport_ack_timeout": 0, 00:35:53.337 "ctrlr_loss_timeout_sec": 0, 00:35:53.337 "reconnect_delay_sec": 0, 00:35:53.337 "fast_io_fail_timeout_sec": 0, 00:35:53.337 "disable_auto_failback": false, 00:35:53.337 "generate_uuids": false, 00:35:53.337 
"transport_tos": 0, 00:35:53.337 "nvme_error_stat": false, 00:35:53.337 "rdma_srq_size": 0, 00:35:53.337 "io_path_stat": false, 00:35:53.337 "allow_accel_sequence": false, 00:35:53.337 "rdma_max_cq_size": 0, 00:35:53.337 "rdma_cm_event_timeout_ms": 0, 00:35:53.337 "dhchap_digests": [ 00:35:53.337 "sha256", 00:35:53.337 "sha384", 00:35:53.337 "sha512" 00:35:53.337 ], 00:35:53.337 "dhchap_dhgroups": [ 00:35:53.337 "null", 00:35:53.337 "ffdhe2048", 00:35:53.337 "ffdhe3072", 00:35:53.337 "ffdhe4096", 00:35:53.337 "ffdhe6144", 00:35:53.337 "ffdhe8192" 00:35:53.337 ] 00:35:53.337 } 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "method": "bdev_nvme_attach_controller", 00:35:53.337 "params": { 00:35:53.337 "name": "nvme0", 00:35:53.337 "trtype": "TCP", 00:35:53.337 "adrfam": "IPv4", 00:35:53.337 "traddr": "127.0.0.1", 00:35:53.337 "trsvcid": "4420", 00:35:53.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:53.337 "prchk_reftag": false, 00:35:53.337 "prchk_guard": false, 00:35:53.337 "ctrlr_loss_timeout_sec": 0, 00:35:53.337 "reconnect_delay_sec": 0, 00:35:53.337 "fast_io_fail_timeout_sec": 0, 00:35:53.337 "psk": "key0", 00:35:53.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.337 "hdgst": false, 00:35:53.337 "ddgst": false 00:35:53.337 } 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "method": "bdev_nvme_set_hotplug", 00:35:53.337 "params": { 00:35:53.337 "period_us": 100000, 00:35:53.337 "enable": false 00:35:53.337 } 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "method": "bdev_wait_for_examine" 00:35:53.337 } 00:35:53.337 ] 00:35:53.337 }, 00:35:53.337 { 00:35:53.337 "subsystem": "nbd", 00:35:53.337 "config": [] 00:35:53.337 } 00:35:53.337 ] 00:35:53.337 }' 00:35:53.337 09:09:11 keyring_file -- keyring/file.sh@114 -- # killprocess 1146097 00:35:53.337 09:09:11 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1146097 ']' 00:35:53.337 09:09:11 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1146097 00:35:53.337 09:09:11 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:35:53.337 09:09:11 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:53.337 09:09:11 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1146097 00:35:53.337 09:09:11 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:53.337 09:09:11 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:53.337 09:09:11 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1146097' 00:35:53.338 killing process with pid 1146097 00:35:53.338 09:09:11 keyring_file -- common/autotest_common.sh@969 -- # kill 1146097 00:35:53.338 Received shutdown signal, test time was about 1.000000 seconds 00:35:53.338 00:35:53.338 Latency(us) 00:35:53.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:53.338 =================================================================================================================== 00:35:53.338 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:53.338 09:09:11 keyring_file -- common/autotest_common.sh@974 -- # wait 1146097 00:35:53.596 09:09:11 keyring_file -- keyring/file.sh@117 -- # bperfpid=1148125 00:35:53.596 09:09:11 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1148125 /var/tmp/bperf.sock 00:35:53.596 09:09:11 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1148125 ']' 00:35:53.596 09:09:11 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:53.596 09:09:11 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:53.596 09:09:11 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:53.596 09:09:11 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:35:53.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:53.596 09:09:11 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:53.596 09:09:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:53.596 09:09:11 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:35:53.596 "subsystems": [ 00:35:53.596 { 00:35:53.596 "subsystem": "keyring", 00:35:53.596 "config": [ 00:35:53.596 { 00:35:53.596 "method": "keyring_file_add_key", 00:35:53.596 "params": { 00:35:53.596 "name": "key0", 00:35:53.596 "path": "/tmp/tmp.vuY5BrKsh3" 00:35:53.596 } 00:35:53.596 }, 00:35:53.596 { 00:35:53.596 "method": "keyring_file_add_key", 00:35:53.596 "params": { 00:35:53.596 "name": "key1", 00:35:53.596 "path": "/tmp/tmp.TRLJzCwdUJ" 00:35:53.596 } 00:35:53.596 } 00:35:53.596 ] 00:35:53.596 }, 00:35:53.596 { 00:35:53.596 "subsystem": "iobuf", 00:35:53.596 "config": [ 00:35:53.596 { 00:35:53.596 "method": "iobuf_set_options", 00:35:53.596 "params": { 00:35:53.596 "small_pool_count": 8192, 00:35:53.596 "large_pool_count": 1024, 00:35:53.596 "small_bufsize": 8192, 00:35:53.596 "large_bufsize": 135168 00:35:53.596 } 00:35:53.596 } 00:35:53.596 ] 00:35:53.596 }, 00:35:53.596 { 00:35:53.596 "subsystem": "sock", 00:35:53.596 "config": [ 00:35:53.596 { 00:35:53.596 "method": "sock_set_default_impl", 00:35:53.596 "params": { 00:35:53.596 "impl_name": "posix" 00:35:53.596 } 00:35:53.596 }, 00:35:53.596 { 00:35:53.596 "method": "sock_impl_set_options", 00:35:53.596 "params": { 00:35:53.596 "impl_name": "ssl", 00:35:53.596 "recv_buf_size": 4096, 00:35:53.596 "send_buf_size": 4096, 00:35:53.596 "enable_recv_pipe": true, 00:35:53.596 "enable_quickack": false, 00:35:53.596 "enable_placement_id": 0, 00:35:53.596 "enable_zerocopy_send_server": true, 00:35:53.596 "enable_zerocopy_send_client": false, 00:35:53.596 "zerocopy_threshold": 0, 00:35:53.596 "tls_version": 0, 00:35:53.596 "enable_ktls": false 
00:35:53.596 } 00:35:53.596 }, 00:35:53.596 { 00:35:53.596 "method": "sock_impl_set_options", 00:35:53.596 "params": { 00:35:53.596 "impl_name": "posix", 00:35:53.596 "recv_buf_size": 2097152, 00:35:53.596 "send_buf_size": 2097152, 00:35:53.596 "enable_recv_pipe": true, 00:35:53.596 "enable_quickack": false, 00:35:53.596 "enable_placement_id": 0, 00:35:53.596 "enable_zerocopy_send_server": true, 00:35:53.596 "enable_zerocopy_send_client": false, 00:35:53.596 "zerocopy_threshold": 0, 00:35:53.596 "tls_version": 0, 00:35:53.596 "enable_ktls": false 00:35:53.596 } 00:35:53.596 } 00:35:53.596 ] 00:35:53.596 }, 00:35:53.596 { 00:35:53.596 "subsystem": "vmd", 00:35:53.596 "config": [] 00:35:53.596 }, 00:35:53.596 { 00:35:53.596 "subsystem": "accel", 00:35:53.596 "config": [ 00:35:53.596 { 00:35:53.596 "method": "accel_set_options", 00:35:53.596 "params": { 00:35:53.596 "small_cache_size": 128, 00:35:53.596 "large_cache_size": 16, 00:35:53.596 "task_count": 2048, 00:35:53.596 "sequence_count": 2048, 00:35:53.596 "buf_count": 2048 00:35:53.597 } 00:35:53.597 } 00:35:53.597 ] 00:35:53.597 }, 00:35:53.597 { 00:35:53.597 "subsystem": "bdev", 00:35:53.597 "config": [ 00:35:53.597 { 00:35:53.597 "method": "bdev_set_options", 00:35:53.597 "params": { 00:35:53.597 "bdev_io_pool_size": 65535, 00:35:53.597 "bdev_io_cache_size": 256, 00:35:53.597 "bdev_auto_examine": true, 00:35:53.597 "iobuf_small_cache_size": 128, 00:35:53.597 "iobuf_large_cache_size": 16 00:35:53.597 } 00:35:53.597 }, 00:35:53.597 { 00:35:53.597 "method": "bdev_raid_set_options", 00:35:53.597 "params": { 00:35:53.597 "process_window_size_kb": 1024, 00:35:53.597 "process_max_bandwidth_mb_sec": 0 00:35:53.597 } 00:35:53.597 }, 00:35:53.597 { 00:35:53.597 "method": "bdev_iscsi_set_options", 00:35:53.597 "params": { 00:35:53.597 "timeout_sec": 30 00:35:53.597 } 00:35:53.597 }, 00:35:53.597 { 00:35:53.597 "method": "bdev_nvme_set_options", 00:35:53.597 "params": { 00:35:53.597 "action_on_timeout": "none", 00:35:53.597 
"timeout_us": 0, 00:35:53.597 "timeout_admin_us": 0, 00:35:53.597 "keep_alive_timeout_ms": 10000, 00:35:53.597 "arbitration_burst": 0, 00:35:53.597 "low_priority_weight": 0, 00:35:53.597 "medium_priority_weight": 0, 00:35:53.597 "high_priority_weight": 0, 00:35:53.597 "nvme_adminq_poll_period_us": 10000, 00:35:53.597 "nvme_ioq_poll_period_us": 0, 00:35:53.597 "io_queue_requests": 512, 00:35:53.597 "delay_cmd_submit": true, 00:35:53.597 "transport_retry_count": 4, 00:35:53.597 "bdev_retry_count": 3, 00:35:53.597 "transport_ack_timeout": 0, 00:35:53.597 "ctrlr_loss_timeout_sec": 0, 00:35:53.597 "reconnect_delay_sec": 0, 00:35:53.597 "fast_io_fail_timeout_sec": 0, 00:35:53.597 "disable_auto_failback": false, 00:35:53.597 "generate_uuids": false, 00:35:53.597 "transport_tos": 0, 00:35:53.597 "nvme_error_stat": false, 00:35:53.597 "rdma_srq_size": 0, 00:35:53.597 "io_path_stat": false, 00:35:53.597 "allow_accel_sequence": false, 00:35:53.597 "rdma_max_cq_size": 0, 00:35:53.597 "rdma_cm_event_timeout_ms": 0, 00:35:53.597 "dhchap_digests": [ 00:35:53.597 "sha256", 00:35:53.597 "sha384", 00:35:53.597 "sha512" 00:35:53.597 ], 00:35:53.597 "dhchap_dhgroups": [ 00:35:53.597 "null", 00:35:53.597 "ffdhe2048", 00:35:53.597 "ffdhe3072", 00:35:53.597 "ffdhe4096", 00:35:53.597 "ffdhe6144", 00:35:53.597 "ffdhe8192" 00:35:53.597 ] 00:35:53.597 } 00:35:53.597 }, 00:35:53.597 { 00:35:53.597 "method": "bdev_nvme_attach_controller", 00:35:53.597 "params": { 00:35:53.597 "name": "nvme0", 00:35:53.597 "trtype": "TCP", 00:35:53.597 "adrfam": "IPv4", 00:35:53.597 "traddr": "127.0.0.1", 00:35:53.597 "trsvcid": "4420", 00:35:53.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:53.597 "prchk_reftag": false, 00:35:53.597 "prchk_guard": false, 00:35:53.597 "ctrlr_loss_timeout_sec": 0, 00:35:53.597 "reconnect_delay_sec": 0, 00:35:53.597 "fast_io_fail_timeout_sec": 0, 00:35:53.597 "psk": "key0", 00:35:53.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.597 "hdgst": false, 00:35:53.597 "ddgst": 
false 00:35:53.597 } 00:35:53.597 }, 00:35:53.597 { 00:35:53.597 "method": "bdev_nvme_set_hotplug", 00:35:53.597 "params": { 00:35:53.597 "period_us": 100000, 00:35:53.597 "enable": false 00:35:53.597 } 00:35:53.597 }, 00:35:53.597 { 00:35:53.597 "method": "bdev_wait_for_examine" 00:35:53.597 } 00:35:53.597 ] 00:35:53.597 }, 00:35:53.597 { 00:35:53.597 "subsystem": "nbd", 00:35:53.597 "config": [] 00:35:53.597 } 00:35:53.597 ] 00:35:53.597 }' 00:35:53.597 [2024-07-26 09:09:11.851428] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:53.597 [2024-07-26 09:09:11.851517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148125 ] 00:35:53.597 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.597 [2024-07-26 09:09:11.882982] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:53.597 [2024-07-26 09:09:11.910862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.597 [2024-07-26 09:09:11.997149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.855 [2024-07-26 09:09:12.178331] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:54.420 09:09:12 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:54.420 09:09:12 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:54.420 09:09:12 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:35:54.420 09:09:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.420 09:09:12 keyring_file -- keyring/file.sh@120 -- # jq length 00:35:54.678 09:09:13 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:35:54.678 09:09:13 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:35:54.678 09:09:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:54.678 09:09:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:54.678 09:09:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:54.678 09:09:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.678 09:09:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:54.935 09:09:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:54.935 09:09:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:35:54.935 09:09:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:54.935 09:09:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:54.935 09:09:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:54.936 09:09:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:35:54.936 09:09:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.193 09:09:13 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:35:55.193 09:09:13 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:35:55.193 09:09:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:55.193 09:09:13 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:35:55.449 09:09:13 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:35:55.449 09:09:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:55.449 09:09:13 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.vuY5BrKsh3 /tmp/tmp.TRLJzCwdUJ 00:35:55.449 09:09:13 keyring_file -- keyring/file.sh@20 -- # killprocess 1148125 00:35:55.449 09:09:13 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1148125 ']' 00:35:55.449 09:09:13 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1148125 00:35:55.449 09:09:13 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:55.449 09:09:13 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:55.449 09:09:13 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1148125 00:35:55.449 09:09:13 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:55.449 09:09:13 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:55.449 09:09:13 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1148125' 00:35:55.449 killing process with pid 1148125 00:35:55.449 09:09:13 keyring_file -- common/autotest_common.sh@969 -- # kill 1148125 00:35:55.449 Received shutdown signal, test time was about 1.000000 seconds 00:35:55.449 00:35:55.449 Latency(us) 00:35:55.449 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.449 =================================================================================================================== 00:35:55.449 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:55.449 09:09:13 keyring_file -- common/autotest_common.sh@974 -- # wait 1148125 00:35:55.706 09:09:14 keyring_file -- keyring/file.sh@21 -- # killprocess 1146066 00:35:55.706 09:09:14 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1146066 ']' 00:35:55.706 09:09:14 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1146066 00:35:55.706 09:09:14 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:55.706 09:09:14 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:55.706 09:09:14 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1146066 00:35:55.706 09:09:14 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:55.706 09:09:14 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:55.706 09:09:14 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1146066' 00:35:55.706 killing process with pid 1146066 00:35:55.706 09:09:14 keyring_file -- common/autotest_common.sh@969 -- # kill 1146066 00:35:55.706 [2024-07-26 09:09:14.084369] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:55.706 09:09:14 keyring_file -- common/autotest_common.sh@974 -- # wait 1146066 00:35:56.272 00:35:56.272 real 0m14.391s 00:35:56.272 user 0m35.797s 00:35:56.272 sys 0m3.400s 00:35:56.272 09:09:14 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:56.272 09:09:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:56.272 ************************************ 00:35:56.272 END TEST keyring_file 00:35:56.272 ************************************ 
00:35:56.272 09:09:14 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:35:56.272 09:09:14 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:56.272 09:09:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:56.272 09:09:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:56.272 09:09:14 -- common/autotest_common.sh@10 -- # set +x 00:35:56.272 ************************************ 00:35:56.272 START TEST keyring_linux 00:35:56.272 ************************************ 00:35:56.272 09:09:14 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:56.272 * Looking for test storage... 00:35:56.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.272 
09:09:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.272 09:09:14 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.272 09:09:14 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.272 09:09:14 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.272 09:09:14 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.272 09:09:14 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.272 09:09:14 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.272 09:09:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:56.272 09:09:14 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:35:56.272 09:09:14 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:56.272 /tmp/:spdk-test:key0 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:56.272 09:09:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:56.272 09:09:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:56.272 /tmp/:spdk-test:key1 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1148483 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:56.272 09:09:14 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1148483 00:35:56.272 09:09:14 keyring_linux -- common/autotest_common.sh@831 
-- # '[' -z 1148483 ']' 00:35:56.272 09:09:14 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.272 09:09:14 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:56.272 09:09:14 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:56.272 09:09:14 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:56.272 09:09:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:56.272 [2024-07-26 09:09:14.714202] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:56.272 [2024-07-26 09:09:14.714294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148483 ] 00:35:56.531 EAL: No free 2048 kB hugepages reported on node 1 00:35:56.531 [2024-07-26 09:09:14.751822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:56.531 [2024-07-26 09:09:14.780233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.531 [2024-07-26 09:09:14.876098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.790 09:09:15 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:56.790 09:09:15 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:56.790 09:09:15 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:56.790 09:09:15 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.790 09:09:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:56.790 [2024-07-26 09:09:15.137525] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.790 null0 00:35:56.790 [2024-07-26 09:09:15.169597] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:56.790 [2024-07-26 09:09:15.170113] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:56.790 09:09:15 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.790 09:09:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:56.790 56141961 00:35:56.790 09:09:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:56.790 1051584011 00:35:56.790 09:09:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1148619 00:35:56.790 09:09:15 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:56.790 09:09:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1148619 /var/tmp/bperf.sock 00:35:56.790 09:09:15 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1148619 ']' 00:35:56.790 09:09:15 keyring_linux -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:56.790 09:09:15 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:56.790 09:09:15 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:56.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:56.790 09:09:15 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:56.790 09:09:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:56.790 [2024-07-26 09:09:15.235306] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:56.790 [2024-07-26 09:09:15.235392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148619 ] 00:35:57.051 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.051 [2024-07-26 09:09:15.267651] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:57.051 [2024-07-26 09:09:15.297802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.051 [2024-07-26 09:09:15.387810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.051 09:09:15 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:57.051 09:09:15 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:57.051 09:09:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:57.051 09:09:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:57.308 09:09:15 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:57.308 09:09:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:57.566 09:09:16 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:57.566 09:09:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:57.823 [2024-07-26 09:09:16.253232] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:58.080 nvme0n1 00:35:58.080 09:09:16 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:58.080 09:09:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:58.080 09:09:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:58.080 09:09:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 
00:35:58.080 09:09:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.080 09:09:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:58.338 09:09:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:58.338 09:09:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:58.338 09:09:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:58.338 09:09:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:58.338 09:09:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.338 09:09:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.338 09:09:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:58.623 09:09:16 keyring_linux -- keyring/linux.sh@25 -- # sn=56141961 00:35:58.623 09:09:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:58.623 09:09:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:58.623 09:09:16 keyring_linux -- keyring/linux.sh@26 -- # [[ 56141961 == \5\6\1\4\1\9\6\1 ]] 00:35:58.623 09:09:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 56141961 00:35:58.623 09:09:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:58.623 09:09:16 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:58.623 Running I/O for 1 seconds... 
00:35:59.564 00:35:59.564 Latency(us) 00:35:59.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.564 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:59.564 nvme0n1 : 1.02 5150.26 20.12 0.00 0.00 24644.84 9563.40 35146.71 00:35:59.564 =================================================================================================================== 00:35:59.564 Total : 5150.26 20.12 0.00 0.00 24644.84 9563.40 35146.71 00:35:59.564 0 00:35:59.564 09:09:17 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:59.564 09:09:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:59.822 09:09:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:59.822 09:09:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:59.822 09:09:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:59.822 09:09:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:59.822 09:09:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.822 09:09:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:00.080 09:09:18 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:00.080 09:09:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:00.080 09:09:18 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:00.080 09:09:18 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.080 09:09:18 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:00.081 09:09:18 keyring_linux -- common/autotest_common.sh@652 -- # 
valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.081 09:09:18 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:00.081 09:09:18 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:00.081 09:09:18 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:00.081 09:09:18 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:00.081 09:09:18 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.081 09:09:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.339 [2024-07-26 09:09:18.703850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:00.339 [2024-07-26 09:09:18.704263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e1a00 (107): Transport endpoint is not connected 00:36:00.339 [2024-07-26 09:09:18.705255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e1a00 (9): Bad file descriptor 00:36:00.339 [2024-07-26 09:09:18.706254] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:00.339 [2024-07-26 09:09:18.706274] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:00.339 [2024-07-26 09:09:18.706289] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:00.339 request: 00:36:00.339 { 00:36:00.339 "name": "nvme0", 00:36:00.339 "trtype": "tcp", 00:36:00.339 "traddr": "127.0.0.1", 00:36:00.339 "adrfam": "ipv4", 00:36:00.339 "trsvcid": "4420", 00:36:00.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:00.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:00.339 "prchk_reftag": false, 00:36:00.339 "prchk_guard": false, 00:36:00.339 "hdgst": false, 00:36:00.339 "ddgst": false, 00:36:00.339 "psk": ":spdk-test:key1", 00:36:00.339 "method": "bdev_nvme_attach_controller", 00:36:00.339 "req_id": 1 00:36:00.339 } 00:36:00.339 Got JSON-RPC error response 00:36:00.339 response: 00:36:00.339 { 00:36:00.339 "code": -5, 00:36:00.339 "message": "Input/output error" 00:36:00.339 } 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@33 -- # sn=56141961 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 56141961 00:36:00.339 1 links removed 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:00.339 09:09:18 keyring_linux -- 
keyring/linux.sh@39 -- # unlink_key key1 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@33 -- # sn=1051584011 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1051584011 00:36:00.339 1 links removed 00:36:00.339 09:09:18 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1148619 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1148619 ']' 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1148619 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1148619 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1148619' 00:36:00.339 killing process with pid 1148619 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@969 -- # kill 1148619 00:36:00.339 Received shutdown signal, test time was about 1.000000 seconds 00:36:00.339 00:36:00.339 Latency(us) 00:36:00.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.339 =================================================================================================================== 00:36:00.339 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:00.339 09:09:18 keyring_linux -- common/autotest_common.sh@974 -- # wait 1148619 00:36:00.597 09:09:18 
keyring_linux -- keyring/linux.sh@42 -- # killprocess 1148483 00:36:00.597 09:09:18 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1148483 ']' 00:36:00.597 09:09:18 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1148483 00:36:00.597 09:09:18 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:00.597 09:09:18 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:00.597 09:09:18 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1148483 00:36:00.597 09:09:18 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:00.597 09:09:18 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:00.597 09:09:18 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1148483' 00:36:00.597 killing process with pid 1148483 00:36:00.597 09:09:18 keyring_linux -- common/autotest_common.sh@969 -- # kill 1148483 00:36:00.597 09:09:18 keyring_linux -- common/autotest_common.sh@974 -- # wait 1148483 00:36:01.165 00:36:01.165 real 0m4.844s 00:36:01.165 user 0m9.068s 00:36:01.165 sys 0m1.545s 00:36:01.165 09:09:19 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:01.165 09:09:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:01.165 ************************************ 00:36:01.165 END TEST keyring_linux 00:36:01.165 ************************************ 00:36:01.165 09:09:19 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:01.165 09:09:19 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:01.165 09:09:19 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:36:01.165 09:09:19 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:36:01.165 09:09:19 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:01.165 09:09:19 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:01.165 09:09:19 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:01.165 09:09:19 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
00:36:01.165 09:09:19 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:01.165 09:09:19 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:01.165 09:09:19 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:36:01.165 09:09:19 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:01.165 09:09:19 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:01.165 09:09:19 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:01.165 09:09:19 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:36:01.165 09:09:19 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:36:01.165 09:09:19 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:36:01.165 09:09:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.165 09:09:19 -- common/autotest_common.sh@10 -- # set +x 00:36:01.165 09:09:19 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:36:01.165 09:09:19 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:01.165 09:09:19 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:01.165 09:09:19 -- common/autotest_common.sh@10 -- # set +x 00:36:03.068 INFO: APP EXITING 00:36:03.068 INFO: killing all VMs 00:36:03.068 INFO: killing vhost app 00:36:03.068 INFO: EXIT DONE 00:36:04.003 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:36:04.003 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:04.003 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:04.003 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:04.003 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:04.003 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:04.003 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:04.003 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:04.003 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:04.003 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:04.261 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:04.261 0000:80:04.5 
(8086 0e25): Already using the ioatdma driver 00:36:04.261 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:04.261 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:04.261 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:04.261 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:04.261 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:05.195 Cleaning 00:36:05.195 Removing: /var/run/dpdk/spdk0/config 00:36:05.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:05.195 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:05.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:05.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:05.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:05.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:05.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:05.453 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:05.453 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:05.453 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:05.453 Removing: /var/run/dpdk/spdk1/config 00:36:05.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:05.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:05.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:05.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:05.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:05.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:05.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:05.453 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:05.453 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:05.453 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:05.453 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:05.453 Removing: /var/run/dpdk/spdk2/config 00:36:05.453 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:05.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:05.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:05.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:05.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:05.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:05.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:05.453 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:05.453 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:05.453 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:05.453 Removing: /var/run/dpdk/spdk3/config 00:36:05.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:05.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:05.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:05.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:05.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:05.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:05.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:05.453 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:05.453 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:05.453 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:05.453 Removing: /var/run/dpdk/spdk4/config 00:36:05.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:05.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:05.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:05.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:05.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:05.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:05.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:05.453 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:05.453 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:36:05.453 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:05.453 Removing: /dev/shm/bdev_svc_trace.1 00:36:05.453 Removing: /dev/shm/nvmf_trace.0 00:36:05.453 Removing: /dev/shm/spdk_tgt_trace.pid832166 00:36:05.453 Removing: /var/run/dpdk/spdk0 00:36:05.453 Removing: /var/run/dpdk/spdk1 00:36:05.453 Removing: /var/run/dpdk/spdk2 00:36:05.453 Removing: /var/run/dpdk/spdk3 00:36:05.453 Removing: /var/run/dpdk/spdk4 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1000993 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1001130 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1001145 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1001281 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1001717 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1002913 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1003637 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1004061 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1005676 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1006045 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1006542 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1008930 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1012231 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1015721 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1039221 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1041866 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1045551 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1046545 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1047658 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1050814 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1053090 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1057258 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1057286 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1060056 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1060192 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1060322 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1060594 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1060714 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1061789 00:36:05.453 Removing: 
/var/run/dpdk/spdk_pid1062963 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1064140 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1065314 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1066502 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1067676 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1071477 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1071813 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1073207 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1073954 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1077547 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1079636 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1083551 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1086865 00:36:05.453 Removing: /var/run/dpdk/spdk_pid1093078 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1097536 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1097541 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1109733 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1110145 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1110549 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1111076 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1111591 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1112065 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1112475 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1112875 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1115479 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1115855 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1119919 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1120085 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1121691 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1126596 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1126607 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1129494 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1130915 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1132313 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1133171 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1134457 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1135328 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1140691 
00:36:05.454 Removing: /var/run/dpdk/spdk_pid1140979 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1141376 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1142929 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1143240 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1143604 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1146066 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1146097 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1148125 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1148483 00:36:05.454 Removing: /var/run/dpdk/spdk_pid1148619 00:36:05.454 Removing: /var/run/dpdk/spdk_pid830615 00:36:05.454 Removing: /var/run/dpdk/spdk_pid831346 00:36:05.454 Removing: /var/run/dpdk/spdk_pid832166 00:36:05.454 Removing: /var/run/dpdk/spdk_pid832592 00:36:05.454 Removing: /var/run/dpdk/spdk_pid833285 00:36:05.712 Removing: /var/run/dpdk/spdk_pid833427 00:36:05.712 Removing: /var/run/dpdk/spdk_pid834141 00:36:05.712 Removing: /var/run/dpdk/spdk_pid834156 00:36:05.712 Removing: /var/run/dpdk/spdk_pid834394 00:36:05.712 Removing: /var/run/dpdk/spdk_pid835593 00:36:05.712 Removing: /var/run/dpdk/spdk_pid836641 00:36:05.712 Removing: /var/run/dpdk/spdk_pid836822 00:36:05.712 Removing: /var/run/dpdk/spdk_pid837126 00:36:05.712 Removing: /var/run/dpdk/spdk_pid837334 00:36:05.712 Removing: /var/run/dpdk/spdk_pid837521 00:36:05.712 Removing: /var/run/dpdk/spdk_pid837681 00:36:05.712 Removing: /var/run/dpdk/spdk_pid837838 00:36:05.712 Removing: /var/run/dpdk/spdk_pid838024 00:36:05.712 Removing: /var/run/dpdk/spdk_pid838324 00:36:05.712 Removing: /var/run/dpdk/spdk_pid841300 00:36:05.712 Removing: /var/run/dpdk/spdk_pid841471 00:36:05.712 Removing: /var/run/dpdk/spdk_pid841631 00:36:05.712 Removing: /var/run/dpdk/spdk_pid841639 00:36:05.712 Removing: /var/run/dpdk/spdk_pid842065 00:36:05.712 Removing: /var/run/dpdk/spdk_pid842070 00:36:05.712 Removing: /var/run/dpdk/spdk_pid842498 00:36:05.712 Removing: /var/run/dpdk/spdk_pid842508 00:36:05.712 Removing: /var/run/dpdk/spdk_pid842792 00:36:05.712 
Removing: /var/run/dpdk/spdk_pid842808 00:36:05.712 Removing: /var/run/dpdk/spdk_pid842975 00:36:05.712 Removing: /var/run/dpdk/spdk_pid843100 00:36:05.712 Removing: /var/run/dpdk/spdk_pid843473 00:36:05.712 Removing: /var/run/dpdk/spdk_pid843626 00:36:05.712 Removing: /var/run/dpdk/spdk_pid843823 00:36:05.712 Removing: /var/run/dpdk/spdk_pid845893 00:36:05.712 Removing: /var/run/dpdk/spdk_pid848494 00:36:05.712 Removing: /var/run/dpdk/spdk_pid855375 00:36:05.712 Removing: /var/run/dpdk/spdk_pid855899 00:36:05.712 Removing: /var/run/dpdk/spdk_pid858292 00:36:05.712 Removing: /var/run/dpdk/spdk_pid858569 00:36:05.712 Removing: /var/run/dpdk/spdk_pid861076 00:36:05.712 Removing: /var/run/dpdk/spdk_pid864786 00:36:05.712 Removing: /var/run/dpdk/spdk_pid866851 00:36:05.712 Removing: /var/run/dpdk/spdk_pid873243 00:36:05.713 Removing: /var/run/dpdk/spdk_pid879069 00:36:05.713 Removing: /var/run/dpdk/spdk_pid880292 00:36:05.713 Removing: /var/run/dpdk/spdk_pid880954 00:36:05.713 Removing: /var/run/dpdk/spdk_pid891170 00:36:05.713 Removing: /var/run/dpdk/spdk_pid893462 00:36:05.713 Removing: /var/run/dpdk/spdk_pid947008 00:36:05.713 Removing: /var/run/dpdk/spdk_pid950179 00:36:05.713 Removing: /var/run/dpdk/spdk_pid953985 00:36:05.713 Removing: /var/run/dpdk/spdk_pid957699 00:36:05.713 Removing: /var/run/dpdk/spdk_pid957739 00:36:05.713 Removing: /var/run/dpdk/spdk_pid958351 00:36:05.713 Removing: /var/run/dpdk/spdk_pid959008 00:36:05.713 Removing: /var/run/dpdk/spdk_pid959583 00:36:05.713 Removing: /var/run/dpdk/spdk_pid960067 00:36:05.713 Removing: /var/run/dpdk/spdk_pid960075 00:36:05.713 Removing: /var/run/dpdk/spdk_pid960210 00:36:05.713 Removing: /var/run/dpdk/spdk_pid960345 00:36:05.713 Removing: /var/run/dpdk/spdk_pid960347 00:36:05.713 Removing: /var/run/dpdk/spdk_pid961003 00:36:05.713 Removing: /var/run/dpdk/spdk_pid961657 00:36:05.713 Removing: /var/run/dpdk/spdk_pid962200 00:36:05.713 Removing: /var/run/dpdk/spdk_pid962595 00:36:05.713 Removing: 
/var/run/dpdk/spdk_pid962713 00:36:05.713 Removing: /var/run/dpdk/spdk_pid962860 00:36:05.713 Removing: /var/run/dpdk/spdk_pid963737 00:36:05.713 Removing: /var/run/dpdk/spdk_pid964453 00:36:05.713 Removing: /var/run/dpdk/spdk_pid970266 00:36:05.713 Removing: /var/run/dpdk/spdk_pid995707 00:36:05.713 Removing: /var/run/dpdk/spdk_pid998502 00:36:05.713 Removing: /var/run/dpdk/spdk_pid999679 00:36:05.713 Clean 00:36:05.713 09:09:24 -- common/autotest_common.sh@1451 -- # return 0 00:36:05.713 09:09:24 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:36:05.713 09:09:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:05.713 09:09:24 -- common/autotest_common.sh@10 -- # set +x 00:36:05.713 09:09:24 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:36:05.713 09:09:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:05.713 09:09:24 -- common/autotest_common.sh@10 -- # set +x 00:36:05.971 09:09:24 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:05.971 09:09:24 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:05.971 09:09:24 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:05.971 09:09:24 -- spdk/autotest.sh@395 -- # hash lcov 00:36:05.971 09:09:24 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:05.971 09:09:24 -- spdk/autotest.sh@397 -- # hostname 00:36:05.971 09:09:24 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:05.971 geninfo: WARNING: invalid characters removed from testname! 
00:36:38.038 09:09:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:38.038 09:09:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:39.935 09:09:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:43.214 09:10:01 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:45.773 09:10:04 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:49.053 09:10:06 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:51.592 09:10:09 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:51.592 09:10:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.592 09:10:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:51.592 09:10:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.592 09:10:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.592 09:10:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.592 09:10:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.592 09:10:09 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.592 09:10:09 -- paths/export.sh@5 -- $ export PATH 00:36:51.592 09:10:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.592 09:10:09 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:51.592 09:10:09 -- common/autobuild_common.sh@447 -- $ date +%s 00:36:51.592 09:10:09 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721977809.XXXXXX 00:36:51.592 09:10:09 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721977809.FYYP3g 00:36:51.592 09:10:09 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:36:51.592 09:10:09 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:36:51.592 09:10:09 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:36:51.592 09:10:09 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:36:51.592 09:10:09 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:51.592 09:10:09 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:51.592 09:10:09 -- common/autobuild_common.sh@463 -- $ get_config_params 00:36:51.592 09:10:09 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:36:51.592 09:10:09 -- common/autotest_common.sh@10 -- $ set +x 00:36:51.592 09:10:09 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:36:51.592 09:10:09 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:36:51.592 09:10:09 -- pm/common@17 -- $ local monitor 00:36:51.592 09:10:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:51.592 09:10:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:51.592 09:10:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:51.592 09:10:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:51.592 09:10:09 -- pm/common@21 -- $ date +%s 00:36:51.592 09:10:09 -- pm/common@25 -- $ sleep 1 00:36:51.592 09:10:09 -- pm/common@21 -- $ date +%s 00:36:51.592 09:10:09 -- pm/common@21 -- $ date +%s 00:36:51.592 09:10:09 -- pm/common@21 -- $ date +%s 00:36:51.592 09:10:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721977809 00:36:51.592 09:10:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721977809 00:36:51.592 
09:10:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721977809 00:36:51.592 09:10:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721977809 00:36:51.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721977809_collect-vmstat.pm.log 00:36:51.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721977809_collect-cpu-load.pm.log 00:36:51.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721977809_collect-cpu-temp.pm.log 00:36:51.592 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721977809_collect-bmc-pm.bmc.pm.log 00:36:52.532 09:10:10 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:36:52.532 09:10:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:36:52.532 09:10:10 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:52.532 09:10:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:52.532 09:10:10 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:52.532 09:10:10 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:52.532 09:10:10 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:52.532 09:10:10 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:52.532 09:10:10 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:52.532 09:10:10 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:52.532 09:10:10 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:52.532 09:10:10 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:52.532 09:10:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:52.532 09:10:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:52.532 09:10:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:52.532 09:10:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:52.532 09:10:10 -- pm/common@44 -- $ pid=1159743 00:36:52.532 09:10:10 -- pm/common@50 -- $ kill -TERM 1159743 00:36:52.532 09:10:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:52.532 09:10:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:52.532 09:10:10 -- pm/common@44 -- $ pid=1159745 00:36:52.532 09:10:10 -- pm/common@50 -- $ kill -TERM 1159745 00:36:52.532 09:10:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:52.532 09:10:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:52.532 09:10:10 -- pm/common@44 -- $ pid=1159747 00:36:52.532 09:10:10 -- pm/common@50 -- $ kill -TERM 1159747 00:36:52.532 09:10:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:52.532 09:10:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:52.532 09:10:10 -- pm/common@44 -- $ pid=1159775 00:36:52.532 09:10:10 -- pm/common@50 -- $ sudo -E kill -TERM 1159775 00:36:52.532 + [[ -n 731087 ]] 00:36:52.532 + sudo kill 731087 00:36:52.804 [Pipeline] } 00:36:52.824 [Pipeline] // stage 00:36:52.828 [Pipeline] } 00:36:52.845 [Pipeline] // timeout 00:36:52.851 [Pipeline] } 00:36:52.869 [Pipeline] // catchError 00:36:52.874 [Pipeline] } 
00:36:52.894 [Pipeline] // wrap 00:36:52.900 [Pipeline] } 00:36:52.918 [Pipeline] // catchError 00:36:52.927 [Pipeline] stage 00:36:52.930 [Pipeline] { (Epilogue) 00:36:52.945 [Pipeline] catchError 00:36:52.946 [Pipeline] { 00:36:52.962 [Pipeline] echo 00:36:52.963 Cleanup processes 00:36:52.970 [Pipeline] sh 00:36:53.260 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:53.260 1159881 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:36:53.260 1160008 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:53.274 [Pipeline] sh 00:36:53.560 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:53.560 ++ grep -v 'sudo pgrep' 00:36:53.560 ++ awk '{print $1}' 00:36:53.560 + sudo kill -9 1159881 00:36:53.573 [Pipeline] sh 00:36:53.856 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:03.836 [Pipeline] sh 00:37:04.121 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:04.121 Artifacts sizes are good 00:37:04.137 [Pipeline] archiveArtifacts 00:37:04.144 Archiving artifacts 00:37:04.391 [Pipeline] sh 00:37:04.675 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:04.689 [Pipeline] cleanWs 00:37:04.699 [WS-CLEANUP] Deleting project workspace... 00:37:04.699 [WS-CLEANUP] Deferred wipeout is used... 00:37:04.706 [WS-CLEANUP] done 00:37:04.708 [Pipeline] } 00:37:04.727 [Pipeline] // catchError 00:37:04.740 [Pipeline] sh 00:37:05.020 + logger -p user.info -t JENKINS-CI 00:37:05.070 [Pipeline] } 00:37:05.084 [Pipeline] // stage 00:37:05.088 [Pipeline] } 00:37:05.099 [Pipeline] // node 00:37:05.102 [Pipeline] End of Pipeline 00:37:05.136 Finished: SUCCESS